00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 137 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3638 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.172 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.173 The recommended git tool is: git 00:00:00.173 using credential 00000000-0000-0000-0000-000000000002 00:00:00.177 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.208 Fetching changes from the remote Git repository 00:00:00.210 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.246 Using shallow fetch with depth 1 00:00:00.246 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.246 > git --version # timeout=10 00:00:00.278 > git --version # 'git version 2.39.2' 00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.297 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.297 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.134 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.148 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.159 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:06.159 > git config core.sparsecheckout # timeout=10 00:00:06.171 > git read-tree -mu HEAD # timeout=10 00:00:06.188 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:06.206 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:06.206 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:06.306 [Pipeline] Start of Pipeline 00:00:06.319 [Pipeline] library 00:00:06.321 Loading library shm_lib@master 00:00:06.321 Library shm_lib@master is cached. Copying from home. 00:00:06.337 [Pipeline] node 00:00:06.352 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.354 [Pipeline] { 00:00:06.363 [Pipeline] catchError 00:00:06.365 [Pipeline] { 00:00:06.377 [Pipeline] wrap 00:00:06.386 [Pipeline] { 00:00:06.393 [Pipeline] stage 00:00:06.394 [Pipeline] { (Prologue) 00:00:06.409 [Pipeline] echo 00:00:06.411 Node: VM-host-SM38 00:00:06.418 [Pipeline] cleanWs 00:00:06.430 [WS-CLEANUP] Deleting project workspace... 00:00:06.430 [WS-CLEANUP] Deferred wipeout is used... 
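As an aside, the shallow, pinned checkout the agent performs above reduces to a handful of git commands; a minimal sketch to reproduce it by hand (repository URL and revision taken from the log; the local directory name is hypothetical):

    # Shallow-fetch the build-pool repo and pin to the exact revision from the log
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --depth=1 origin refs/heads/master
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf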
00:00:06.438 [WS-CLEANUP] done 00:00:06.617 [Pipeline] setCustomBuildProperty 00:00:06.720 [Pipeline] httpRequest 00:00:07.003 [Pipeline] echo 00:00:07.005 Sorcerer 10.211.164.20 is alive 00:00:07.014 [Pipeline] retry 00:00:07.016 [Pipeline] { 00:00:07.031 [Pipeline] httpRequest 00:00:07.036 HttpMethod: GET 00:00:07.036 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.037 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:07.038 Response Code: HTTP/1.1 200 OK 00:00:07.039 Success: Status code 200 is in the accepted range: 200,404 00:00:07.039 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.277 [Pipeline] } 00:00:08.290 [Pipeline] // retry 00:00:08.296 [Pipeline] sh 00:00:08.593 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.612 [Pipeline] httpRequest 00:00:08.955 [Pipeline] echo 00:00:08.957 Sorcerer 10.211.164.20 is alive 00:00:08.968 [Pipeline] retry 00:00:08.970 [Pipeline] { 00:00:08.986 [Pipeline] httpRequest 00:00:08.992 HttpMethod: GET 00:00:08.993 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.994 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:08.994 Response Code: HTTP/1.1 200 OK 00:00:08.995 Success: Status code 200 is in the accepted range: 200,404 00:00:08.996 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:35.027 [Pipeline] } 00:01:35.044 [Pipeline] // retry 00:01:35.052 [Pipeline] sh 00:01:35.336 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:38.657 [Pipeline] sh 00:01:38.943 + git -C spdk log --oneline -n5 00:01:38.943 b18e1bd62 version: v24.09.1-pre 00:01:38.943 19524ad45 version: v24.09 00:01:38.943 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:38.943 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:38.943 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:38.964 [Pipeline] withCredentials 00:01:38.977 > git --version # timeout=10 00:01:38.990 > git --version # 'git version 2.39.2' 00:01:39.011 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:39.013 [Pipeline] { 00:01:39.021 [Pipeline] retry 00:01:39.023 [Pipeline] { 00:01:39.038 [Pipeline] sh 00:01:39.321 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:39.595 [Pipeline] } 00:01:39.612 [Pipeline] // retry 00:01:39.617 [Pipeline] } 00:01:39.632 [Pipeline] // withCredentials 00:01:39.641 [Pipeline] httpRequest 00:01:40.273 [Pipeline] echo 00:01:40.275 Sorcerer 10.211.164.20 is alive 00:01:40.285 [Pipeline] retry 00:01:40.287 [Pipeline] { 00:01:40.301 [Pipeline] httpRequest 00:01:40.307 HttpMethod: GET 00:01:40.308 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.309 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.313 Response Code: HTTP/1.1 200 OK 00:01:40.314 Success: Status code 200 is in the accepted range: 200,404 00:01:40.314 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:53.365 [Pipeline] } 00:01:53.381 [Pipeline] // retry 00:01:53.387 [Pipeline] sh 00:01:53.671 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.603 [Pipeline] sh 00:01:55.890 + git -C dpdk log --oneline -n5 00:01:55.890 eeb0605f11 version: 23.11.0 00:01:55.890 238778122a doc: update release notes for 23.11 00:01:55.890 46aa6b3cfc doc: fix description of RSS features 00:01:55.890 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:55.890 7e421ae345 devtools: support skipping forbid rule check 00:01:55.910 [Pipeline] writeFile 00:01:55.925 [Pipeline] sh 00:01:56.213 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:56.227 [Pipeline] sh 00:01:56.574 + cat autorun-spdk.conf 00:01:56.575 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.575 SPDK_TEST_NVME=1 00:01:56.575 SPDK_TEST_FTL=1 00:01:56.575 SPDK_TEST_ISAL=1 00:01:56.575 SPDK_RUN_ASAN=1 00:01:56.575 SPDK_RUN_UBSAN=1 00:01:56.575 SPDK_TEST_XNVME=1 00:01:56.575 SPDK_TEST_NVME_FDP=1 00:01:56.575 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.575 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.575 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.582 RUN_NIGHTLY=1 00:01:56.584 [Pipeline] } 00:01:56.597 [Pipeline] // stage 00:01:56.612 [Pipeline] stage 00:01:56.615 [Pipeline] { (Run VM) 00:01:56.626 [Pipeline] sh 00:01:56.908 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:56.908 + echo 'Start stage prepare_nvme.sh' 00:01:56.908 Start stage prepare_nvme.sh 00:01:56.908 + [[ -n 2 ]] 00:01:56.908 + disk_prefix=ex2 00:01:56.908 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:56.908 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:56.908 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:56.908 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.908 ++ SPDK_TEST_NVME=1 00:01:56.908 ++ SPDK_TEST_FTL=1 00:01:56.908 ++ SPDK_TEST_ISAL=1 00:01:56.908 ++ SPDK_RUN_ASAN=1 00:01:56.908 ++ SPDK_RUN_UBSAN=1 00:01:56.908 ++ SPDK_TEST_XNVME=1 00:01:56.908 ++ SPDK_TEST_NVME_FDP=1 00:01:56.908 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.908 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.908 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.908 ++ RUN_NIGHTLY=1 00:01:56.908 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:56.909 + nvme_files=() 00:01:56.909 + declare -A nvme_files 00:01:56.909 + backend_dir=/var/lib/libvirt/images/backends 00:01:56.909 + nvme_files['nvme.img']=5G 00:01:56.909 + nvme_files['nvme-cmb.img']=5G 00:01:56.909 + nvme_files['nvme-multi0.img']=4G 00:01:56.909 + nvme_files['nvme-multi1.img']=4G 00:01:56.909 + nvme_files['nvme-multi2.img']=4G 00:01:56.909 + nvme_files['nvme-openstack.img']=8G 00:01:56.909 + nvme_files['nvme-zns.img']=5G 00:01:56.909 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:56.909 + (( SPDK_TEST_FTL == 1 )) 00:01:56.909 + nvme_files["nvme-ftl.img"]=6G 00:01:56.909 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:56.909 + nvme_files["nvme-fdp.img"]=1G 00:01:56.909 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:56.909 + for nvme in "${!nvme_files[@]}" 00:01:56.909 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:56.909 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:57.167 + for nvme in "${!nvme_files[@]}" 00:01:57.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:57.167 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:57.167 + for nvme in "${!nvme_files[@]}" 00:01:57.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:57.167 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:57.167 + for nvme in "${!nvme_files[@]}" 00:01:57.167 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:57.167 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.167 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:57.167 + echo 'End stage prepare_nvme.sh' 00:01:57.167 End stage prepare_nvme.sh 00:01:57.179 [Pipeline] sh 00:01:57.457 + DISTRO=fedora39 00:01:57.457 + CPUS=10 00:01:57.457 + RAM=12288 00:01:57.457 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:57.457 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:57.457 00:01:57.457 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:57.457 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:57.457 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:57.457 HELP=0 00:01:57.457 DRY_RUN=0 00:01:57.457 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:57.457 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:57.457 NVME_AUTO_CREATE=0 00:01:57.457 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:57.457 NVME_CMB=,,,, 00:01:57.457 NVME_PMR=,,,, 00:01:57.457 NVME_ZNS=,,,, 00:01:57.457 NVME_MS=true,,,, 00:01:57.457 NVME_FDP=,,,on, 00:01:57.457 SPDK_VAGRANT_DISTRO=fedora39 00:01:57.457 SPDK_VAGRANT_VMCPU=10 00:01:57.457 SPDK_VAGRANT_VMRAM=12288 00:01:57.457 SPDK_VAGRANT_PROVIDER=libvirt 00:01:57.457 SPDK_VAGRANT_HTTP_PROXY= 00:01:57.457 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:57.457 SPDK_OPENSTACK_NETWORK=0 00:01:57.457 VAGRANT_PACKAGE_BOX=0 00:01:57.457 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:57.457 FORCE_DISTRO=true 00:01:57.457 VAGRANT_BOX_VERSION= 00:01:57.457 EXTRA_VAGRANTFILES= 00:01:57.457 NIC_MODEL=e1000 00:01:57.457 00:01:57.457 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:57.457 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:59.998 Bringing machine 'default' up with 'libvirt' provider... 00:02:00.259 ==> default: Creating image (snapshot of base box volume). 00:02:00.520 ==> default: Creating domain with the following settings... 
00:02:00.520 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731851438_8bdd9ac7efc7e6e25794 00:02:00.520 ==> default: -- Domain type: kvm 00:02:00.520 ==> default: -- Cpus: 10 00:02:00.520 ==> default: -- Feature: acpi 00:02:00.520 ==> default: -- Feature: apic 00:02:00.520 ==> default: -- Feature: pae 00:02:00.520 ==> default: -- Memory: 12288M 00:02:00.520 ==> default: -- Memory Backing: hugepages: 00:02:00.520 ==> default: -- Management MAC: 00:02:00.520 ==> default: -- Loader: 00:02:00.520 ==> default: -- Nvram: 00:02:00.520 ==> default: -- Base box: spdk/fedora39 00:02:00.520 ==> default: -- Storage pool: default 00:02:00.520 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731851438_8bdd9ac7efc7e6e25794.img (20G) 00:02:00.520 ==> default: -- Volume Cache: default 00:02:00.520 ==> default: -- Kernel: 00:02:00.520 ==> default: -- Initrd: 00:02:00.520 ==> default: -- Graphics Type: vnc 00:02:00.520 ==> default: -- Graphics Port: -1 00:02:00.520 ==> default: -- Graphics IP: 127.0.0.1 00:02:00.520 ==> default: -- Graphics Password: Not defined 00:02:00.520 ==> default: -- Video Type: cirrus 00:02:00.520 ==> default: -- Video VRAM: 9216 00:02:00.520 ==> default: -- Sound Type: 00:02:00.520 ==> default: -- Keymap: en-us 00:02:00.520 ==> default: -- TPM Path: 00:02:00.520 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:00.520 ==> default: -- Command line args: 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:00.520 ==> default: -> value=-drive, 00:02:00.520 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:00.520 ==> default: -> value=-device, 00:02:00.520 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.781 ==> default: Creating shared folders metadata... 00:02:00.781 ==> default: Starting domain. 00:02:02.166 ==> default: Waiting for domain to get an IP address... 00:02:20.273 ==> default: Waiting for SSH to become available... 00:02:20.273 ==> default: Configuring and enabling network interfaces... 00:02:23.577 default: SSH address: 192.168.121.197:22 00:02:23.577 default: SSH username: vagrant 00:02:23.577 default: SSH auth method: private key 00:02:26.128 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:31.408 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:36.672 ==> default: Mounting SSHFS shared folder... 00:02:37.605 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:37.605 ==> default: Checking Mount.. 00:02:38.539 ==> default: Folder Successfully Mounted! 00:02:38.539 00:02:38.539 SUCCESS! 00:02:38.539 00:02:38.539 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:38.539 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:38.539 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:38.539 00:02:38.548 [Pipeline] } 00:02:38.565 [Pipeline] // stage 00:02:38.575 [Pipeline] dir 00:02:38.575 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:38.577 [Pipeline] { 00:02:38.591 [Pipeline] catchError 00:02:38.593 [Pipeline] { 00:02:38.606 [Pipeline] sh 00:02:38.883 + vagrant ssh-config --host vagrant 00:02:38.883 + sed -ne '/^Host/,$p' 00:02:38.883 + tee ssh_conf 00:02:41.416 Host vagrant 00:02:41.416 HostName 192.168.121.197 00:02:41.416 User vagrant 00:02:41.416 Port 22 00:02:41.416 UserKnownHostsFile /dev/null 00:02:41.416 StrictHostKeyChecking no 00:02:41.416 PasswordAuthentication no 00:02:41.416 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:41.416 IdentitiesOnly yes 00:02:41.416 LogLevel FATAL 00:02:41.416 ForwardAgent yes 00:02:41.416 ForwardX11 yes 00:02:41.416 00:02:41.429 [Pipeline] withEnv 00:02:41.432 [Pipeline] { 00:02:41.445 [Pipeline] sh 00:02:41.723 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:41.723 source /etc/os-release 00:02:41.723 [[ -e /image.version ]] && img=$(< /image.version) 00:02:41.723 # Minimal, systemd-like check. 
00:02:41.723 if [[ -e /.dockerenv ]]; then 00:02:41.723 # Clear garbage from the node'\''s name: 00:02:41.723 # agt-er_autotest_547-896 -> autotest_547-896 00:02:41.723 # $HOSTNAME is the actual container id 00:02:41.723 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:41.723 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:41.723 # We can assume this is a mount from a host where container is running, 00:02:41.723 # so fetch its hostname to easily identify the target swarm worker. 00:02:41.723 container="$(< /etc/hostname) ($agent)" 00:02:41.723 else 00:02:41.723 # Fallback 00:02:41.723 container=$agent 00:02:41.723 fi 00:02:41.723 fi 00:02:41.723 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:41.723 ' 00:02:41.734 [Pipeline] } 00:02:41.751 [Pipeline] // withEnv 00:02:41.759 [Pipeline] setCustomBuildProperty 00:02:41.773 [Pipeline] stage 00:02:41.776 [Pipeline] { (Tests) 00:02:41.794 [Pipeline] sh 00:02:42.133 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:42.146 [Pipeline] sh 00:02:42.424 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:42.441 [Pipeline] timeout 00:02:42.441 Timeout set to expire in 50 min 00:02:42.444 [Pipeline] { 00:02:42.460 [Pipeline] sh 00:02:42.738 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:42.997 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:43.010 [Pipeline] sh 00:02:43.288 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:43.558 [Pipeline] sh 00:02:43.835 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:44.108 [Pipeline] sh 00:02:44.386 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:44.645 ++ readlink -f spdk_repo 00:02:44.645 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:44.645 + [[ -n /home/vagrant/spdk_repo ]] 00:02:44.645 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:44.645 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:44.645 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:44.645 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:44.645 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:44.645 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:44.645 + cd /home/vagrant/spdk_repo 00:02:44.645 + source /etc/os-release 00:02:44.645 ++ NAME='Fedora Linux' 00:02:44.645 ++ VERSION='39 (Cloud Edition)' 00:02:44.645 ++ ID=fedora 00:02:44.645 ++ VERSION_ID=39 00:02:44.645 ++ VERSION_CODENAME= 00:02:44.645 ++ PLATFORM_ID=platform:f39 00:02:44.645 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:44.645 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:44.645 ++ LOGO=fedora-logo-icon 00:02:44.645 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:44.645 ++ HOME_URL=https://fedoraproject.org/ 00:02:44.645 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:44.645 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:44.645 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:44.645 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:44.645 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:44.645 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:44.645 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:44.645 ++ SUPPORT_END=2024-11-12 00:02:44.645 ++ VARIANT='Cloud Edition' 00:02:44.645 ++ VARIANT_ID=cloud 00:02:44.645 + uname -a 00:02:44.645 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:44.645 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:44.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:45.161 Hugepages 00:02:45.161 node hugesize free / total 00:02:45.161 node0 1048576kB 0 / 0 00:02:45.161 node0 2048kB 0 / 0 00:02:45.161 00:02:45.161 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.161 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:45.161 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:45.161 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:45.161 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:45.161 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:45.161 + rm -f /tmp/spdk-ld-path 00:02:45.161 + source autorun-spdk.conf 00:02:45.161 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.161 ++ SPDK_TEST_NVME=1 00:02:45.161 ++ SPDK_TEST_FTL=1 00:02:45.161 ++ SPDK_TEST_ISAL=1 00:02:45.161 ++ SPDK_RUN_ASAN=1 00:02:45.161 ++ SPDK_RUN_UBSAN=1 00:02:45.161 ++ SPDK_TEST_XNVME=1 00:02:45.161 ++ SPDK_TEST_NVME_FDP=1 00:02:45.161 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:45.161 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:45.161 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.161 ++ RUN_NIGHTLY=1 00:02:45.161 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:45.161 + [[ -n '' ]] 00:02:45.161 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:45.161 + for M in /var/spdk/build-*-manifest.txt 00:02:45.161 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:45.161 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.161 + for M in /var/spdk/build-*-manifest.txt 00:02:45.161 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:45.161 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.161 + for M in /var/spdk/build-*-manifest.txt 00:02:45.161 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:45.161 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.161 ++ uname 00:02:45.161 + [[ Linux == 
\L\i\n\u\x ]] 00:02:45.161 + sudo dmesg -T 00:02:45.161 + sudo dmesg --clear 00:02:45.161 + dmesg_pid=5762 00:02:45.161 + [[ Fedora Linux == FreeBSD ]] 00:02:45.161 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:45.161 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:45.161 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:45.161 + [[ -x /usr/src/fio-static/fio ]] 00:02:45.161 + sudo dmesg -Tw 00:02:45.161 + export FIO_BIN=/usr/src/fio-static/fio 00:02:45.161 + FIO_BIN=/usr/src/fio-static/fio 00:02:45.161 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:45.161 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:45.161 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:45.161 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:45.161 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:45.161 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:45.161 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.161 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.161 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:45.161 Test configuration: 00:02:45.161 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.161 SPDK_TEST_NVME=1 00:02:45.161 SPDK_TEST_FTL=1 00:02:45.161 SPDK_TEST_ISAL=1 00:02:45.161 SPDK_RUN_ASAN=1 00:02:45.161 SPDK_RUN_UBSAN=1 00:02:45.161 SPDK_TEST_XNVME=1 00:02:45.161 SPDK_TEST_NVME_FDP=1 00:02:45.161 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:45.161 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:45.161 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.420 RUN_NIGHTLY=1 13:51:23 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:45.420 13:51:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:45.420 13:51:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:45.420 13:51:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:45.420 13:51:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.420 13:51:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.420 13:51:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.420 13:51:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.420 13:51:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.420 13:51:23 -- paths/export.sh@5 -- $ export PATH 00:02:45.420 13:51:23 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.420 13:51:23 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:45.420 13:51:23 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:45.420 13:51:23 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731851483.XXXXXX 00:02:45.420 13:51:23 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731851483.FrAntu 00:02:45.420 13:51:23 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:45.420 13:51:23 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:45.420 13:51:23 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:45.420 13:51:23 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:45.420 13:51:23 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:45.420 13:51:23 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:45.420 13:51:23 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:45.420 13:51:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:45.420 13:51:23 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.420 13:51:23 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:45.421 13:51:23 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:45.421 13:51:23 -- pm/common@17 -- $ local monitor 00:02:45.421 13:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.421 13:51:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.421 13:51:23 -- pm/common@25 -- $ sleep 1 00:02:45.421 13:51:23 -- pm/common@21 -- $ date +%s 00:02:45.421 13:51:23 -- pm/common@21 -- $ date +%s 00:02:45.421 13:51:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731851483 00:02:45.421 13:51:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731851483 00:02:45.421 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731851483_collect-cpu-load.pm.log 00:02:45.421 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731851483_collect-vmstat.pm.log 00:02:46.355 13:51:24 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:46.355 13:51:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.355 13:51:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.355 13:51:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:46.355 13:51:24 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.355 Sun 
Nov 17 01:51:24 PM UTC 2024 00:02:46.355 13:51:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.355 v24.09-1-gb18e1bd62 00:02:46.355 13:51:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:46.355 13:51:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:46.355 13:51:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.355 13:51:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.355 13:51:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.355 ************************************ 00:02:46.355 START TEST asan 00:02:46.355 ************************************ 00:02:46.355 using asan 00:02:46.355 13:51:24 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:46.355 00:02:46.355 real 0m0.000s 00:02:46.355 user 0m0.000s 00:02:46.355 sys 0m0.000s 00:02:46.355 13:51:24 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.355 13:51:24 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.355 ************************************ 00:02:46.355 END TEST asan 00:02:46.355 ************************************ 00:02:46.355 13:51:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.355 13:51:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.355 13:51:24 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.355 13:51:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.355 13:51:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.356 ************************************ 00:02:46.356 START TEST ubsan 00:02:46.356 ************************************ 00:02:46.356 using ubsan 00:02:46.356 13:51:24 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:46.356 00:02:46.356 real 0m0.000s 00:02:46.356 user 0m0.000s 00:02:46.356 sys 0m0.000s 00:02:46.356 13:51:24 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.356 ************************************ 00:02:46.356 END TEST ubsan 00:02:46.356 13:51:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.356 ************************************ 00:02:46.356 13:51:24 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:46.356 13:51:24 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:46.356 13:51:24 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:46.356 13:51:24 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:46.356 13:51:24 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.356 13:51:24 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.356 ************************************ 00:02:46.356 START TEST build_native_dpdk 00:02:46.356 ************************************ 00:02:46.356 13:51:24 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:46.356 13:51:24 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:46.356 eeb0605f11 version: 23.11.0 00:02:46.356 238778122a doc: update release notes for 23.11 00:02:46.356 46aa6b3cfc doc: fix description of RSS features 00:02:46.356 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:46.356 7e421ae345 devtools: support skipping forbid rule check 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:46.356 13:51:24 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:46.356 patching file config/rte_config.h 00:02:46.356 Hunk #1 succeeded at 60 (offset 1 line). 
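The xtrace above is dense; as a rough sketch of what it walks through (a hypothetical stand-in for cmp_versions in scripts/common.sh, omitting its non-numeric-component guard), the comparison splits both version strings on '.', '-' and ':' and compares the fields numerically:

    # Returns 0 (true) when $1 is strictly older than $2, mirroring the lt() trace
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # ver1 is newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # ver1 is older
        done
        return 1  # equal, hence not strictly less-than
    }
    version_lt 23.11.0 21.11.0 || echo 'not older'  # matches the trace: lt returns 1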
00:02:46.356 13:51:24 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.356 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:46.614 13:51:24 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:46.614 patching file lib/pcapng/rte_pcapng.c 00:02:46.614 13:51:24 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.614 13:51:24 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:46.614 13:51:24 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:46.614 13:51:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:46.615 13:51:24 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:46.615 13:51:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:46.615 13:51:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:50.796 The Meson build system 00:02:50.796 Version: 1.5.0 00:02:50.796 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:50.796 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:50.796 Build type: native build 00:02:50.796 Program cat found: YES (/usr/bin/cat) 00:02:50.796 Project name: DPDK 00:02:50.796 Project version: 23.11.0 00:02:50.796 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:50.796 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:50.796 Host machine cpu family: x86_64 00:02:50.796 Host machine cpu: x86_64 00:02:50.796 Message: ## Building in Developer Mode ## 00:02:50.796 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:50.796 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:50.796 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:50.796 Program python3 found: YES (/usr/bin/python3) 00:02:50.796 Program cat found: YES (/usr/bin/cat) 00:02:50.796 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
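Condensed, the external-DPDK build that meson is starting here (plus the install and SPDK configure steps implied by the config_params earlier in the log) looks roughly like the sketch below; the ninja step is assumed rather than shown verbatim above, and the extra CFLAGS and -Dmachine knobs from the log are omitted for brevity:

    # Build a driver-subset DPDK into dpdk/build, then point SPDK at it
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix="$PWD/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp install
    cd ../spdk
    ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build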
00:02:50.796 Compiler for C supports arguments -march=native: YES 00:02:50.796 Checking for size of "void *" : 8 00:02:50.796 Checking for size of "void *" : 8 (cached) 00:02:50.796 Library m found: YES 00:02:50.796 Library numa found: YES 00:02:50.796 Has header "numaif.h" : YES 00:02:50.796 Library fdt found: NO 00:02:50.796 Library execinfo found: NO 00:02:50.796 Has header "execinfo.h" : YES 00:02:50.796 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:50.796 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:50.796 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:50.796 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:50.796 Run-time dependency openssl found: YES 3.1.1 00:02:50.796 Run-time dependency libpcap found: YES 1.10.4 00:02:50.796 Has header "pcap.h" with dependency libpcap: YES 00:02:50.796 Compiler for C supports arguments -Wcast-qual: YES 00:02:50.796 Compiler for C supports arguments -Wdeprecated: YES 00:02:50.796 Compiler for C supports arguments -Wformat: YES 00:02:50.796 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:50.796 Compiler for C supports arguments -Wformat-security: NO 00:02:50.796 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.796 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:50.796 Compiler for C supports arguments -Wnested-externs: YES 00:02:50.796 Compiler for C supports arguments -Wold-style-definition: YES 00:02:50.796 Compiler for C supports arguments -Wpointer-arith: YES 00:02:50.796 Compiler for C supports arguments -Wsign-compare: YES 00:02:50.796 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:50.796 Compiler for C supports arguments -Wundef: YES 00:02:50.796 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.796 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:50.796 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:50.796 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.796 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:50.796 Program objdump found: YES (/usr/bin/objdump) 00:02:50.796 Compiler for C supports arguments -mavx512f: YES 00:02:50.796 Checking if "AVX512 checking" compiles: YES 00:02:50.796 Fetching value of define "__SSE4_2__" : 1 00:02:50.796 Fetching value of define "__AES__" : 1 00:02:50.796 Fetching value of define "__AVX__" : 1 00:02:50.796 Fetching value of define "__AVX2__" : 1 00:02:50.796 Fetching value of define "__AVX512BW__" : 1 00:02:50.796 Fetching value of define "__AVX512CD__" : 1 00:02:50.796 Fetching value of define "__AVX512DQ__" : 1 00:02:50.796 Fetching value of define "__AVX512F__" : 1 00:02:50.796 Fetching value of define "__AVX512VL__" : 1 00:02:50.796 Fetching value of define "__PCLMUL__" : 1 00:02:50.796 Fetching value of define "__RDRND__" : 1 00:02:50.796 Fetching value of define "__RDSEED__" : 1 00:02:50.796 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:50.796 Fetching value of define "__znver1__" : (undefined) 00:02:50.796 Fetching value of define "__znver2__" : (undefined) 00:02:50.796 Fetching value of define "__znver3__" : (undefined) 00:02:50.796 Fetching value of define "__znver4__" : (undefined) 00:02:50.796 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:50.796 Message: lib/log: Defining dependency "log" 00:02:50.796 Message: lib/kvargs: Defining dependency "kvargs" 00:02:50.796 Message: lib/telemetry: Defining dependency "telemetry" 
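The long run of "Fetching value of define" lines above is meson probing the compiler's predefined feature macros under -march=native; on an x86_64 host the same set can be inspected by hand with stock gcc (standard flags, not an SPDK helper):

    # Dump the SIMD/crypto feature macros the host compiler predefines
    echo | gcc -march=native -dM -E - | grep -E '__(AVX512[A-Z]+|AES|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'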
00:02:50.796 Checking for function "getentropy" : NO 00:02:50.796 Message: lib/eal: Defining dependency "eal" 00:02:50.796 Message: lib/ring: Defining dependency "ring" 00:02:50.796 Message: lib/rcu: Defining dependency "rcu" 00:02:50.796 Message: lib/mempool: Defining dependency "mempool" 00:02:50.796 Message: lib/mbuf: Defining dependency "mbuf" 00:02:50.796 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:50.796 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:50.796 Compiler for C supports arguments -mpclmul: YES 00:02:50.796 Compiler for C supports arguments -maes: YES 00:02:50.796 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:50.796 Compiler for C supports arguments -mavx512bw: YES 00:02:50.796 Compiler for C supports arguments -mavx512dq: YES 00:02:50.796 Compiler for C supports arguments -mavx512vl: YES 00:02:50.796 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:50.796 Compiler for C supports arguments -mavx2: YES 00:02:50.796 Compiler for C supports arguments -mavx: YES 00:02:50.796 Message: lib/net: Defining dependency "net" 00:02:50.796 Message: lib/meter: Defining dependency "meter" 00:02:50.796 Message: lib/ethdev: Defining dependency "ethdev" 00:02:50.796 Message: lib/pci: Defining dependency "pci" 00:02:50.796 Message: lib/cmdline: Defining dependency "cmdline" 00:02:50.796 Message: lib/metrics: Defining dependency "metrics" 00:02:50.796 Message: lib/hash: Defining dependency "hash" 00:02:50.796 Message: lib/timer: Defining dependency "timer" 00:02:50.796 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:50.796 Message: lib/acl: Defining dependency "acl" 00:02:50.796 Message: lib/bbdev: Defining dependency "bbdev" 00:02:50.796 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:50.796 Run-time dependency libelf found: YES 0.191 00:02:50.796 Message: lib/bpf: Defining dependency "bpf" 00:02:50.796 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:50.796 Message: lib/compressdev: Defining dependency "compressdev" 00:02:50.796 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:50.796 Message: lib/distributor: Defining dependency "distributor" 00:02:50.796 Message: lib/dmadev: Defining dependency "dmadev" 00:02:50.796 Message: lib/efd: Defining dependency "efd" 00:02:50.796 Message: lib/eventdev: Defining dependency "eventdev" 00:02:50.796 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:50.796 Message: lib/gpudev: Defining dependency "gpudev" 00:02:50.796 Message: lib/gro: Defining dependency "gro" 00:02:50.796 Message: lib/gso: Defining dependency "gso" 00:02:50.796 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:50.796 Message: lib/jobstats: Defining dependency "jobstats" 00:02:50.796 Message: lib/latencystats: Defining dependency "latencystats" 00:02:50.796 Message: lib/lpm: Defining dependency "lpm" 00:02:50.796 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:50.796 Fetching value of define "__AVX512IFMA__" : 1 00:02:50.796 Message: 
00:02:50.796 Message: lib/member: Defining dependency "member"
00:02:50.796 Message: lib/pcapng: Defining dependency "pcapng"
00:02:50.796 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:50.796 Message: lib/power: Defining dependency "power"
00:02:50.796 Message: lib/rawdev: Defining dependency "rawdev"
00:02:50.796 Message: lib/regexdev: Defining dependency "regexdev"
00:02:50.796 Message: lib/mldev: Defining dependency "mldev"
00:02:50.796 Message: lib/rib: Defining dependency "rib"
00:02:50.796 Message: lib/reorder: Defining dependency "reorder"
00:02:50.796 Message: lib/sched: Defining dependency "sched"
00:02:50.796 Message: lib/security: Defining dependency "security"
00:02:50.796 Message: lib/stack: Defining dependency "stack"
00:02:50.796 Has header "linux/userfaultfd.h" : YES
00:02:50.796 Has header "linux/vduse.h" : YES
00:02:50.796 Message: lib/vhost: Defining dependency "vhost"
00:02:50.796 Message: lib/ipsec: Defining dependency "ipsec"
00:02:50.796 Message: lib/pdcp: Defining dependency "pdcp"
00:02:50.796 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:50.796 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:50.796 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:50.796 Message: lib/fib: Defining dependency "fib"
00:02:50.796 Message: lib/port: Defining dependency "port"
00:02:50.796 Message: lib/pdump: Defining dependency "pdump"
00:02:50.797 Message: lib/table: Defining dependency "table"
00:02:50.797 Message: lib/pipeline: Defining dependency "pipeline"
00:02:50.797 Message: lib/graph: Defining dependency "graph"
00:02:50.797 Message: lib/node: Defining dependency "node"
00:02:50.797 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:50.797 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:50.797 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:50.797 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:52.213 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:52.213 Compiler for C supports arguments -Wno-unused-value: YES
00:02:52.213 Compiler for C supports arguments -Wno-format: YES
00:02:52.213 Compiler for C supports arguments -Wno-format-security: YES
00:02:52.213 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:52.213 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:52.213 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:52.213 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:52.213 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:52.213 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:52.213 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:52.213 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:52.213 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:52.213 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:52.213 Has header "sys/epoll.h" : YES
00:02:52.213 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:52.213 Configuring doxy-api-html.conf using configuration
00:02:52.213 Configuring doxy-api-man.conf using configuration
00:02:52.213 Program mandb found: YES (/usr/bin/mandb)
00:02:52.213 Program sphinx-build found: NO
00:02:52.213 Configuring rte_build_config.h using configuration
00:02:52.213 Message:
00:02:52.213 =================
00:02:52.213 Applications Enabled
00:02:52.213 =================
00:02:52.213
00:02:52.213 apps:
00:02:52.213 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:52.213 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:52.213 test-pmd, test-regex, test-sad, test-security-perf,
00:02:52.213
00:02:52.213 Message:
00:02:52.213 =================
00:02:52.213 Libraries Enabled
00:02:52.213 =================
00:02:52.213
00:02:52.213 libs:
00:02:52.213 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:52.213 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:52.213 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:52.213 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:52.213 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:52.213 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:52.213 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:52.213
00:02:52.213
00:02:52.213 Message:
00:02:52.213 ===============
00:02:52.213 Drivers Enabled
00:02:52.213 ===============
00:02:52.213
00:02:52.213 common:
00:02:52.213
00:02:52.213 bus:
00:02:52.213 pci, vdev,
00:02:52.213 mempool:
00:02:52.213 ring,
00:02:52.213 dma:
00:02:52.213
00:02:52.213 net:
00:02:52.213 i40e,
00:02:52.213 raw:
00:02:52.213
00:02:52.213 crypto:
00:02:52.213
00:02:52.213 compress:
00:02:52.213
00:02:52.213 regex:
00:02:52.213
00:02:52.213 ml:
00:02:52.213
00:02:52.213 vdpa:
00:02:52.213
00:02:52.213 event:
00:02:52.213
00:02:52.213 baseband:
00:02:52.213
00:02:52.213 gpu:
00:02:52.213
00:02:52.213
00:02:52.213 Message:
00:02:52.213 =================
00:02:52.213 Content Skipped
00:02:52.213 =================
00:02:52.213
00:02:52.213 apps:
00:02:52.213
00:02:52.213 libs:
00:02:52.213
00:02:52.213 drivers:
00:02:52.213 common/cpt: not in enabled drivers build config
00:02:52.213 common/dpaax: not in enabled drivers build config
00:02:52.213 common/iavf: not in enabled drivers build config
00:02:52.213 common/idpf: not in enabled drivers build config
00:02:52.213 common/mvep: not in enabled drivers build config
00:02:52.213 common/octeontx: not in enabled drivers build config
00:02:52.213 bus/auxiliary: not in enabled drivers build config
00:02:52.213 bus/cdx: not in enabled drivers build config
00:02:52.213 bus/dpaa: not in enabled drivers build config
00:02:52.213 bus/fslmc: not in enabled drivers build config
00:02:52.213 bus/ifpga: not in enabled drivers build config
00:02:52.213 bus/platform: not in enabled drivers build config
00:02:52.214 bus/vmbus: not in enabled drivers build config
00:02:52.214 common/cnxk: not in enabled drivers build config
00:02:52.214 common/mlx5: not in enabled drivers build config
00:02:52.214 common/nfp: not in enabled drivers build config
00:02:52.214 common/qat: not in enabled drivers build config
00:02:52.214 common/sfc_efx: not in enabled drivers build config
00:02:52.214 mempool/bucket: not in enabled drivers build config
00:02:52.214 mempool/cnxk: not in enabled drivers build config
00:02:52.214 mempool/dpaa: not in enabled drivers build config
00:02:52.214 mempool/dpaa2: not in enabled drivers build config
00:02:52.214 mempool/octeontx: not in enabled drivers build config
00:02:52.214 mempool/stack: not in enabled drivers build config
00:02:52.214 dma/cnxk: not in enabled drivers build config
00:02:52.214 dma/dpaa: not in enabled drivers build config
00:02:52.214 dma/dpaa2: not in enabled drivers build config
00:02:52.214 dma/hisilicon: not in enabled drivers build config
00:02:52.214 dma/idxd: not in enabled drivers build config
00:02:52.214 dma/ioat: not in enabled drivers build config
00:02:52.214 dma/skeleton: not in enabled drivers build config
00:02:52.214 net/af_packet: not in enabled drivers build config
00:02:52.214 net/af_xdp: not in enabled drivers build config
00:02:52.214 net/ark: not in enabled drivers build config
00:02:52.214 net/atlantic: not in enabled drivers build config
00:02:52.214 net/avp: not in enabled drivers build config
00:02:52.214 net/axgbe: not in enabled drivers build config
00:02:52.214 net/bnx2x: not in enabled drivers build config
00:02:52.214 net/bnxt: not in enabled drivers build config
00:02:52.214 net/bonding: not in enabled drivers build config
00:02:52.214 net/cnxk: not in enabled drivers build config
00:02:52.214 net/cpfl: not in enabled drivers build config
00:02:52.214 net/cxgbe: not in enabled drivers build config
00:02:52.214 net/dpaa: not in enabled drivers build config
00:02:52.214 net/dpaa2: not in enabled drivers build config
00:02:52.214 net/e1000: not in enabled drivers build config
00:02:52.214 net/ena: not in enabled drivers build config
00:02:52.214 net/enetc: not in enabled drivers build config
00:02:52.214 net/enetfec: not in enabled drivers build config
00:02:52.214 net/enic: not in enabled drivers build config
00:02:52.214 net/failsafe: not in enabled drivers build config
00:02:52.214 net/fm10k: not in enabled drivers build config
00:02:52.214 net/gve: not in enabled drivers build config
00:02:52.214 net/hinic: not in enabled drivers build config
00:02:52.214 net/hns3: not in enabled drivers build config
00:02:52.214 net/iavf: not in enabled drivers build config
00:02:52.214 net/ice: not in enabled drivers build config
00:02:52.214 net/idpf: not in enabled drivers build config
00:02:52.214 net/igc: not in enabled drivers build config
00:02:52.214 net/ionic: not in enabled drivers build config
00:02:52.214 net/ipn3ke: not in enabled drivers build config
00:02:52.214 net/ixgbe: not in enabled drivers build config
00:02:52.214 net/mana: not in enabled drivers build config
00:02:52.214 net/memif: not in enabled drivers build config
00:02:52.214 net/mlx4: not in enabled drivers build config
00:02:52.214 net/mlx5: not in enabled drivers build config
00:02:52.214 net/mvneta: not in enabled drivers build config
00:02:52.214 net/mvpp2: not in enabled drivers build config
00:02:52.214 net/netvsc: not in enabled drivers build config
00:02:52.214 net/nfb: not in enabled drivers build config
00:02:52.214 net/nfp: not in enabled drivers build config
00:02:52.214 net/ngbe: not in enabled drivers build config
00:02:52.214 net/null: not in enabled drivers build config
00:02:52.214 net/octeontx: not in enabled drivers build config
00:02:52.214 net/octeon_ep: not in enabled drivers build config
00:02:52.214 net/pcap: not in enabled drivers build config
00:02:52.214 net/pfe: not in enabled drivers build config
00:02:52.214 net/qede: not in enabled drivers build config
00:02:52.214 net/ring: not in enabled drivers build config
00:02:52.214 net/sfc: not in enabled drivers build config
00:02:52.214 net/softnic: not in enabled drivers build config
00:02:52.214 net/tap: not in enabled drivers build config
00:02:52.214 net/thunderx: not in enabled drivers build config
00:02:52.214 net/txgbe: not in enabled drivers build config
00:02:52.214 net/vdev_netvsc: not in enabled drivers build config
00:02:52.214 net/vhost: not in enabled drivers build config
00:02:52.214 net/virtio: not in enabled drivers build config
00:02:52.214 net/vmxnet3: not in enabled drivers build config
00:02:52.214 raw/cnxk_bphy: not in enabled drivers build config
00:02:52.214 raw/cnxk_gpio: not in enabled drivers build config
00:02:52.214 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:52.214 raw/ifpga: not in enabled drivers build config
00:02:52.214 raw/ntb: not in enabled drivers build config
00:02:52.214 raw/skeleton: not in enabled drivers build config
00:02:52.214 crypto/armv8: not in enabled drivers build config
00:02:52.214 crypto/bcmfs: not in enabled drivers build config
00:02:52.214 crypto/caam_jr: not in enabled drivers build config
00:02:52.214 crypto/ccp: not in enabled drivers build config
00:02:52.214 crypto/cnxk: not in enabled drivers build config
00:02:52.214 crypto/dpaa_sec: not in enabled drivers build config
00:02:52.214 crypto/dpaa2_sec: not in enabled drivers build config
00:02:52.214 crypto/ipsec_mb: not in enabled drivers build config
00:02:52.214 crypto/mlx5: not in enabled drivers build config
00:02:52.214 crypto/mvsam: not in enabled drivers build config
00:02:52.214 crypto/nitrox: not in enabled drivers build config
00:02:52.214 crypto/null: not in enabled drivers build config
00:02:52.214 crypto/octeontx: not in enabled drivers build config
00:02:52.214 crypto/openssl: not in enabled drivers build config
00:02:52.214 crypto/scheduler: not in enabled drivers build config
00:02:52.214 crypto/uadk: not in enabled drivers build config
00:02:52.214 crypto/virtio: not in enabled drivers build config
00:02:52.214 compress/isal: not in enabled drivers build config
00:02:52.214 compress/mlx5: not in enabled drivers build config
00:02:52.214 compress/octeontx: not in enabled drivers build config
00:02:52.214 compress/zlib: not in enabled drivers build config
00:02:52.214 regex/mlx5: not in enabled drivers build config
00:02:52.214 regex/cn9k: not in enabled drivers build config
00:02:52.214 ml/cnxk: not in enabled drivers build config
00:02:52.214 vdpa/ifc: not in enabled drivers build config
00:02:52.214 vdpa/mlx5: not in enabled drivers build config
00:02:52.214 vdpa/nfp: not in enabled drivers build config
00:02:52.214 vdpa/sfc: not in enabled drivers build config
00:02:52.214 event/cnxk: not in enabled drivers build config
00:02:52.214 event/dlb2: not in enabled drivers build config
00:02:52.214 event/dpaa: not in enabled drivers build config
00:02:52.214 event/dpaa2: not in enabled drivers build config
00:02:52.214 event/dsw: not in enabled drivers build config
00:02:52.214 event/opdl: not in enabled drivers build config
00:02:52.214 event/skeleton: not in enabled drivers build config
00:02:52.214 event/sw: not in enabled drivers build config
00:02:52.214 event/octeontx: not in enabled drivers build config
00:02:52.214 baseband/acc: not in enabled drivers build config
00:02:52.214 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:52.214 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:52.214 baseband/la12xx: not in enabled drivers build config
00:02:52.214 baseband/null: not in enabled drivers build config
00:02:52.214 baseband/turbo_sw: not in enabled drivers build config
00:02:52.214 gpu/cuda: not in enabled drivers build config
00:02:52.214
00:02:52.214
00:02:52.214 Build targets in project: 215
00:02:52.214
00:02:52.214 DPDK 23.11.0
00:02:52.214
00:02:52.214 User defined options
00:02:52.214 libdir : lib
00:02:52.214 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:52.214 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:52.214 c_link_args :
00:02:52.214 enable_docs : false
00:02:52.214 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:52.214 enable_kmods : false
00:02:52.214 machine : native
00:02:52.214 tests : false
00:02:52.214
00:02:52.214 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:52.214 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:52.214 13:51:30 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:52.214 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:52.214 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:52.214 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:52.214 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:52.214 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:52.214 [5/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:52.214 [6/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:52.214 [7/705] Linking static target lib/librte_kvargs.a
00:02:52.214 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:52.214 [9/705] Linking static target lib/librte_log.a
00:02:52.214 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:52.473 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:52.473 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.473 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:52.473 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:52.473 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:52.473 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:52.732 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.732 [18/705] Linking target lib/librte_log.so.24.0
00:02:52.732 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:52.732 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:52.732 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:52.732 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:52.732 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:52.991 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:52.991 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:52.991 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:52.991 [27/705] Linking target lib/librte_kvargs.so.24.0
00:02:52.991 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:52.991 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:52.991 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:52.991 [31/705] Linking static target lib/librte_telemetry.a
00:02:52.991 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:53.248 [33/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
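The WARNING above notes that the wrapper invoked `meson [options]` rather than the modern `meson setup [options]`. A reconstruction of the configure step implied by the logged "User defined options", in the non-deprecated form (a sketch assembled from the log, not the CI wrapper's literal command line):

    # Configure DPDK into build-tmp with the options recorded in the summary above.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Then build, exactly as the autobuild step below does.
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10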
00:02:53.248 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:53.248 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:53.248 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:53.248 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:53.248 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:53.248 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:53.248 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:53.248 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:53.248 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.505 [43/705] Linking target lib/librte_telemetry.so.24.0
00:02:53.505 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:53.505 [45/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:53.505 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:53.505 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:53.505 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:53.762 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:53.762 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:53.762 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:53.762 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:53.762 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:53.762 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:53.762 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:53.762 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:53.762 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:53.762 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:54.020 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:54.020 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:54.020 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:54.020 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:54.020 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:54.020 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:54.020 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:54.020 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:54.020 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:54.020 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:54.278 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:54.278 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:54.278 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:54.278 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:54.278 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
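The `Generating symbol file ...so.24.0.symbols` steps interleaved above are meson's relink-avoidance bookkeeping: after linking a shared library, meson records its exported symbol list, and dependent targets are relinked only when that list changes. The exported set can be inspected directly on the built artifact; a quick sketch, with the path assumed from the build directory named above:

    # List the dynamic symbols librte_log actually exports; this is the
    # information meson captures into librte_log.so.24.0.symbols.
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_log.so.24.0 | head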
00:02:54.278 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.278 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.278 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:54.278 [77/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.536 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.536 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.536 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:54.536 [81/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:54.536 [82/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:54.536 [83/705] Linking static target lib/librte_ring.a 00:02:54.536 [84/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:54.793 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:54.793 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:54.793 [87/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.793 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:54.793 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:54.793 [90/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:54.793 [91/705] Linking static target lib/librte_mempool.a 00:02:55.051 [92/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.051 [93/705] Linking static target lib/librte_eal.a 00:02:55.051 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:55.051 [95/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:55.051 [96/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.051 [97/705] Linking static target lib/librte_rcu.a 00:02:55.051 [98/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:55.051 [99/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:55.051 [100/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:55.051 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:55.310 [102/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.310 [103/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:55.310 [104/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.310 [105/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.310 [106/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:55.310 [107/705] Linking static target lib/librte_meter.a 00:02:55.310 [108/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:55.310 [109/705] Linking static target lib/librte_net.a 00:02:55.310 [110/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:55.568 [111/705] Linking static target lib/librte_mbuf.a 00:02:55.568 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.568 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.568 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:55.568 [115/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.568 [116/705] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.568 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:55.825 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.825 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:55.825 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.083 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.083 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.083 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.083 [124/705] Linking static target lib/librte_pci.a 00:02:56.083 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.083 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:56.341 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.341 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.341 [129/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.341 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:56.341 [131/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:56.341 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:56.341 [133/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:56.341 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:56.599 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:56.599 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:56.599 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:56.599 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:56.599 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:56.599 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:56.599 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:56.599 [142/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.599 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:56.599 [144/705] Linking static target lib/librte_cmdline.a 00:02:56.858 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:56.858 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:56.858 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:56.858 [148/705] Linking static target lib/librte_metrics.a 00:02:56.858 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.116 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.116 [151/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.116 [152/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.116 [153/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.116 [154/705] Linking static target lib/librte_timer.a 00:02:57.116 [155/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 
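The recurring `Generating lib/<name>.sym_chk` targets above are DPDK's own symbol-export check, run as a meson custom command, which verifies that what each library exports stays consistent with its declared ABI map. Conceptually it is close to the following comparison; a rough sketch with assumed paths (relative to the DPDK source and build trees), not the project's actual script:

    # Symbols exported by the built library...
    nm -D --defined-only build-tmp/lib/librte_eal.so.24.0 | awk '{print $3}' | sort -u > exported.txt
    # ...versus symbols declared in the library's version map in the source tree.
    grep -oE '\brte_[A-Za-z0-9_]+' lib/eal/version.map | sort -u > declared.txt
    diff exported.txt declared.txt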
00:02:57.373 [156/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:57.373 [157/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.374 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:57.374 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:57.632 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:57.632 [161/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:57.632 [162/705] Linking static target lib/librte_bitratestats.a 00:02:57.891 [163/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:57.891 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.891 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:57.891 [166/705] Linking static target lib/librte_bbdev.a 00:02:58.151 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:58.151 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:58.151 [169/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:58.151 [170/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.409 [171/705] Linking static target lib/librte_hash.a 00:02:58.409 [172/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:58.409 [173/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:58.409 [174/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.409 [175/705] Linking static target lib/acl/libavx2_tmp.a 00:02:58.409 [176/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:58.409 [177/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:58.409 [178/705] Linking static target lib/librte_ethdev.a 00:02:58.667 [179/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:58.667 [180/705] Linking static target lib/librte_cfgfile.a 00:02:58.667 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:58.667 [182/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:58.667 [183/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.667 [184/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:58.926 [185/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.926 [186/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.926 [187/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:58.926 [188/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:58.926 [189/705] Linking target lib/librte_eal.so.24.0 00:02:58.926 [190/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:58.926 [191/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:58.926 [192/705] Linking target lib/librte_ring.so.24.0 00:02:59.185 [193/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:59.185 [194/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:59.185 [195/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:59.185 [196/705] Linking target lib/librte_meter.so.24.0 00:02:59.185 [197/705] Linking target lib/librte_pci.so.24.0 00:02:59.185 
[198/705] Linking target lib/librte_rcu.so.24.0 00:02:59.185 [199/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:59.185 [200/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:59.185 [201/705] Linking target lib/librte_mempool.so.24.0 00:02:59.185 [202/705] Linking target lib/librte_timer.so.24.0 00:02:59.185 [203/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:59.185 [204/705] Linking static target lib/librte_acl.a 00:02:59.185 [205/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:59.185 [206/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:59.185 [207/705] Linking static target lib/librte_bpf.a 00:02:59.185 [208/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:59.185 [209/705] Linking static target lib/librte_compressdev.a 00:02:59.185 [210/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:59.185 [211/705] Linking target lib/librte_cfgfile.so.24.0 00:02:59.185 [212/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:59.443 [213/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:59.443 [214/705] Linking target lib/librte_mbuf.so.24.0 00:02:59.443 [215/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:59.443 [216/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:59.443 [217/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.443 [218/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.443 [219/705] Linking target lib/librte_net.so.24.0 00:02:59.443 [220/705] Linking target lib/librte_bbdev.so.24.0 00:02:59.443 [221/705] Linking target lib/librte_acl.so.24.0 00:02:59.443 [222/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:59.443 [223/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:59.443 [224/705] Linking static target lib/librte_distributor.a 00:02:59.700 [225/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:59.700 [226/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:59.700 [227/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.700 [228/705] Linking target lib/librte_cmdline.so.24.0 00:02:59.700 [229/705] Linking target lib/librte_hash.so.24.0 00:02:59.700 [230/705] Linking target lib/librte_compressdev.so.24.0 00:02:59.701 [231/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:59.701 [232/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.701 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:59.701 [234/705] Linking target lib/librte_distributor.so.24.0 00:02:59.959 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:59.959 [236/705] Linking static target lib/librte_dmadev.a 00:02:59.959 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:00.216 [238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:00.216 [239/705] Generating lib/dmadev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:00.216 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:00.216 [241/705] Linking target lib/librte_dmadev.so.24.0 00:03:00.216 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:00.474 [243/705] Linking static target lib/librte_efd.a 00:03:00.474 [244/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:00.474 [245/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:00.474 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.474 [247/705] Linking static target lib/librte_cryptodev.a 00:03:00.474 [248/705] Linking target lib/librte_efd.so.24.0 00:03:00.732 [249/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:00.732 [250/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:00.732 [251/705] Linking static target lib/librte_dispatcher.a 00:03:00.732 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:00.732 [253/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:00.732 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:00.732 [255/705] Linking static target lib/librte_gpudev.a 00:03:01.000 [256/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:01.000 [257/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.000 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:01.000 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:01.259 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:01.259 [261/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.259 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:01.259 [263/705] Linking target lib/librte_cryptodev.so.24.0 00:03:01.259 [264/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:01.259 [265/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.517 [266/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:01.517 [267/705] Linking target lib/librte_gpudev.so.24.0 00:03:01.517 [268/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:01.517 [269/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:01.517 [270/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:01.517 [271/705] Linking static target lib/librte_gro.a 00:03:01.517 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:01.517 [273/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:01.517 [274/705] Linking static target lib/librte_eventdev.a 00:03:01.517 [275/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:01.517 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:01.517 [277/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.775 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:01.775 [279/705] Linking static target lib/librte_gso.a 00:03:01.775 [280/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:01.775 [281/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:01.775 [282/705] Linking target lib/librte_ethdev.so.24.0 00:03:01.775 [283/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.775 [284/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:01.775 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:01.775 [286/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:01.775 [287/705] Linking target lib/librte_metrics.so.24.0 00:03:01.775 [288/705] Linking target lib/librte_bpf.so.24.0 00:03:01.775 [289/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:01.775 [290/705] Linking target lib/librte_gro.so.24.0 00:03:01.775 [291/705] Linking static target lib/librte_jobstats.a 00:03:02.032 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:02.032 [293/705] Linking target lib/librte_gso.so.24.0 00:03:02.032 [294/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:02.032 [295/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:02.032 [296/705] Linking target lib/librte_bitratestats.so.24.0 00:03:02.032 [297/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:02.032 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:02.032 [299/705] Linking static target lib/librte_ip_frag.a 00:03:02.032 [300/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:02.032 [301/705] Linking static target lib/librte_latencystats.a 00:03:02.033 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.290 [303/705] Linking target lib/librte_jobstats.so.24.0 00:03:02.290 [304/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.290 [305/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:02.290 [306/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:02.290 [307/705] Linking target lib/librte_ip_frag.so.24.0 00:03:02.290 [308/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.290 [309/705] Linking target lib/librte_latencystats.so.24.0 00:03:02.290 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.290 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:02.290 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.549 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:02.549 [314/705] Linking static target lib/librte_lpm.a 00:03:02.549 [315/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.549 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:02.549 [317/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:02.549 [318/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.549 [319/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.806 [320/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:02.806 [321/705] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.806 [322/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.806 [323/705] Linking target lib/librte_lpm.so.24.0 00:03:02.806 [324/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:02.806 [325/705] Linking static target lib/librte_pcapng.a 00:03:02.806 [326/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.806 [327/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.806 [328/705] Linking target lib/librte_eventdev.so.24.0 00:03:02.806 [329/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.806 [330/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:03.064 [331/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.064 [332/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:03.064 [333/705] Linking target lib/librte_dispatcher.so.24.0 00:03:03.064 [334/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.064 [335/705] Linking target lib/librte_pcapng.so.24.0 00:03:03.064 [336/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.064 [337/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.064 [338/705] Linking static target lib/librte_power.a 00:03:03.064 [339/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:03.064 [340/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:03.323 [341/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:03.323 [342/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:03.323 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:03.323 [344/705] Linking static target lib/librte_regexdev.a 00:03:03.323 [345/705] Linking static target lib/librte_rawdev.a 00:03:03.323 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:03.323 [347/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:03.323 [348/705] Linking static target lib/librte_member.a 00:03:03.323 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:03.582 [350/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.582 [351/705] Linking target lib/librte_power.so.24.0 00:03:03.582 [352/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:03.582 [353/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.582 [354/705] Linking static target lib/librte_mldev.a 00:03:03.582 [355/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:03.582 [356/705] Linking target lib/librte_member.so.24.0 00:03:03.582 [357/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:03.582 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.582 [359/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:03.582 [360/705] Linking target lib/librte_rawdev.so.24.0 00:03:03.841 [361/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.841 [362/705] Linking static target lib/librte_reorder.a 00:03:03.841 [363/705] 
Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:03.841 [364/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.841 [365/705] Linking target lib/librte_regexdev.so.24.0 00:03:03.841 [366/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:03.841 [367/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:03.841 [368/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:03.841 [369/705] Linking static target lib/librte_stack.a 00:03:03.841 [370/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.841 [371/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:03.841 [372/705] Linking static target lib/librte_rib.a 00:03:03.841 [373/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.841 [374/705] Linking target lib/librte_reorder.so.24.0 00:03:04.169 [375/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.169 [376/705] Linking target lib/librte_stack.so.24.0 00:03:04.169 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:04.169 [378/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.169 [379/705] Linking static target lib/librte_security.a 00:03:04.169 [380/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.169 [381/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.169 [382/705] Linking target lib/librte_rib.so.24.0 00:03:04.428 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.428 [384/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.428 [385/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.428 [386/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:04.428 [387/705] Linking target lib/librte_mldev.so.24.0 00:03:04.428 [388/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.428 [389/705] Linking target lib/librte_security.so.24.0 00:03:04.428 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:04.428 [391/705] Linking static target lib/librte_sched.a 00:03:04.428 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:04.686 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.686 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.686 [395/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.944 [396/705] Linking target lib/librte_sched.so.24.0 00:03:04.944 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.944 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:04.944 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:04.944 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.203 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:05.203 [402/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:05.203 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:05.203 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:05.203 [405/705] 
Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:05.461 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:05.461 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:05.461 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:05.461 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:05.461 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:05.461 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:05.720 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:05.720 [413/705] Linking static target lib/librte_ipsec.a 00:03:05.720 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:05.720 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:05.978 [416/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.978 [417/705] Linking target lib/librte_ipsec.so.24.0 00:03:05.978 [418/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:05.978 [419/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:05.978 [420/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:05.978 [421/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:05.978 [422/705] Linking static target lib/librte_fib.a 00:03:05.978 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:06.236 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:06.236 [425/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.236 [426/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:06.237 [427/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:06.237 [428/705] Linking target lib/librte_fib.so.24.0 00:03:06.494 [429/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:06.494 [430/705] Linking static target lib/librte_pdcp.a 00:03:06.753 [431/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:06.753 [432/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:06.753 [433/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:06.753 [434/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:06.753 [435/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.753 [436/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:06.753 [437/705] Linking target lib/librte_pdcp.so.24.0 00:03:06.753 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:07.011 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:07.011 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:07.268 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:07.268 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:07.268 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:07.268 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:07.268 [445/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:07.268 [446/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:07.268 [447/705] Linking 
static target lib/librte_pdump.a 00:03:07.268 [448/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:07.268 [449/705] Linking static target lib/librte_port.a 00:03:07.526 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:07.526 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:07.526 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.526 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:07.784 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.784 [455/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:07.784 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:07.784 [457/705] Linking target lib/librte_port.so.24.0 00:03:07.784 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:07.784 [459/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.784 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:07.784 [461/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:08.041 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:08.041 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:08.041 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:08.041 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:08.041 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:08.041 [467/705] Linking static target lib/librte_table.a 00:03:08.300 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:08.558 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:08.558 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.558 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:08.558 [472/705] Linking target lib/librte_table.so.24.0 00:03:08.558 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.558 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:08.816 [475/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:08.816 [476/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:08.816 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:08.816 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:08.816 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:08.816 [480/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:09.075 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:09.075 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:09.333 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:09.333 [484/705] Linking static target lib/librte_graph.a 00:03:09.333 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:09.333 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:09.333 [487/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:09.333 [488/705] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:09.591 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:09.591 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.591 [491/705] Linking target lib/librte_graph.so.24.0 00:03:09.591 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:09.591 [493/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:09.849 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:09.849 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:09.849 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:09.849 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:09.849 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:10.107 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.107 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:10.107 [501/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:10.107 [502/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:10.107 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:10.364 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:10.364 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.364 [506/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:10.364 [507/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.364 [508/705] Linking static target lib/librte_node.a 00:03:10.364 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:10.364 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.623 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.623 [512/705] Linking target lib/librte_node.so.24.0 00:03:10.623 [513/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.623 [514/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.623 [515/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.623 [516/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:10.623 [517/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:10.623 [518/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.623 [519/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:10.623 [520/705] Linking static target drivers/librte_bus_vdev.a 00:03:10.623 [521/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.623 [522/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:10.623 [523/705] Linking static target drivers/librte_bus_pci.a 00:03:10.881 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:10.881 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:10.881 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:10.881 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:10.881 [528/705] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.881 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:10.881 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:11.139 [531/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.139 [532/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.139 [533/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:11.139 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:11.139 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:11.139 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:11.139 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.139 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.139 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:11.139 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.139 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:11.397 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:11.397 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:11.656 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:11.656 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:11.973 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:12.254 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:12.254 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:12.254 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:12.254 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:12.254 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:12.512 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:12.512 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:12.770 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:12.770 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:12.771 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:12.771 [557/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:13.028 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:13.028 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:13.028 [560/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:13.286 [561/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:13.286 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:13.286 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:13.544 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:13.544 [565/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:13.544 [566/705] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:13.544 [567/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:13.803 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:13.803 [569/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:13.803 [570/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:13.803 [571/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:13.803 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:13.803 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:14.061 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:14.061 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:14.061 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:14.061 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:14.061 [578/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:14.061 [579/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:14.319 [580/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:14.319 [581/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:14.319 [582/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:14.319 [583/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:14.319 [584/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:14.319 [585/705] Linking static target drivers/librte_net_i40e.a 00:03:14.577 [586/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:14.577 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:14.577 [588/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:14.835 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.835 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:14.835 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:14.835 [592/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:14.835 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:15.093 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:15.093 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:15.093 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:15.350 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:15.350 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:15.350 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:15.350 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:15.610 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:15.610 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:15.610 [603/705] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:15.610 [604/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:15.610 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:15.610 [606/705] Linking static target lib/librte_vhost.a 00:03:15.610 [607/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:15.869 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:15.869 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:15.869 [610/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:15.869 [611/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:16.127 [612/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:16.127 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:16.127 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:16.127 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:16.386 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:16.386 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.386 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:16.645 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:16.904 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:16.904 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:16.904 [622/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:16.904 [623/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.904 [624/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.904 [625/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:16.904 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:16.904 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:17.163 [628/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:17.163 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:17.163 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:17.163 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:17.163 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:17.422 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:17.422 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:17.422 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:17.422 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:17.422 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:17.681 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:17.681 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:17.681 [640/705] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:17.681 [641/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.681 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:17.941 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:17.941 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:17.941 [645/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:17.941 [646/705] Linking static target lib/librte_pipeline.a 00:03:17.941 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:17.941 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.941 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:17.941 [650/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:17.941 [651/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:18.201 [652/705] Linking target app/dpdk-dumpcap 00:03:18.201 [653/705] Linking target app/dpdk-graph 00:03:18.201 [654/705] Linking target app/dpdk-pdump 00:03:18.201 [655/705] Linking target app/dpdk-proc-info 00:03:18.201 [656/705] Linking target app/dpdk-test-acl 00:03:18.459 [657/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:18.459 [658/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:18.459 [659/705] Linking target app/dpdk-test-cmdline 00:03:18.459 [660/705] Linking target app/dpdk-test-compress-perf 00:03:18.459 [661/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:18.459 [662/705] Linking target app/dpdk-test-crypto-perf 00:03:18.717 [663/705] Linking target app/dpdk-test-dma-perf 00:03:18.717 [664/705] Linking target app/dpdk-test-eventdev 00:03:18.717 [665/705] Linking target app/dpdk-test-flow-perf 00:03:18.717 [666/705] Linking target app/dpdk-test-fib 00:03:18.717 [667/705] Linking target app/dpdk-test-gpudev 00:03:18.717 [668/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:18.717 [669/705] Linking target app/dpdk-test-pipeline 00:03:18.717 [670/705] Linking target app/dpdk-test-mldev 00:03:18.976 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:18.976 [672/705] Linking target app/dpdk-test-bbdev 00:03:18.976 [673/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:19.233 [674/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:19.491 [675/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:19.491 [676/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:19.491 [677/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:19.491 [678/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:19.491 [679/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:19.756 [680/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:19.756 [681/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.756 [682/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:19.756 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:19.756 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:19.756 [685/705] Linking 
target lib/librte_pipeline.so.24.0 00:03:20.032 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:20.032 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:20.032 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:20.032 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:20.291 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:20.291 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:20.549 [692/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:20.808 [693/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:20.808 [694/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:20.808 [695/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:20.808 [696/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:20.808 [697/705] Linking target app/dpdk-test-sad 00:03:20.808 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:21.067 [699/705] Linking target app/dpdk-test-regex 00:03:21.067 [700/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:21.067 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:21.067 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:21.325 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:21.325 [704/705] Linking target app/dpdk-test-security-perf 00:03:21.583 [705/705] Linking target app/dpdk-testpmd 00:03:21.583 13:51:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:21.583 13:51:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:21.583 13:51:59 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:21.583 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:21.583 [0/1] Installing files. 
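With all 705 targets built, the harness checks "uname -s" against FreeBSD and then installs the build tree. A minimal sketch of the equivalent manual steps is below; the "ninja -C ... install" invocation is taken verbatim from the log above, while the "meson setup" line and the shape of the FreeBSD guard are assumptions, since autobuild_common.sh itself is not shown in this excerpt:

    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp                       # assumed configure step; exact options not shown in the log
    ninja -C build-tmp -j10                     # builds the 705 compile/link targets listed above
    if [[ "$(uname -s)" != "FreeBSD" ]]; then   # mirrors the uname/FreeBSD test traced above (assumed branch)
        ninja -C build-tmp -j10 install         # verbatim from the log; installs under dpdk/build
    fi

The install phase that follows copies the DPDK example sources into /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples, one "Installing ... to ..." line per file.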
00:03:21.845 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:21.845 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.845 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.846 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.847 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.847 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:21.848 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.849 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.850 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.851 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.851 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.851 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.851 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.851 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.852 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.113 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.113 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.113 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.113 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:22.113 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.113 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.114 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.115 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:22.116 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:22.116 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:22.116 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:22.116 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:22.116 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:22.116 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:22.116 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:22.116 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:22.116 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:22.116 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:22.117 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:22.117 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:22.117 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:22.117 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:22.117 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:22.117 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:22.117 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:22.117 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:22.117 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:22.117 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:22.117 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:22.117 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:22.117 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:22.117 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:22.117 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:22.117 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:22.117 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:22.117 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:22.117 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:22.117 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:22.117 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:22.117 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:22.117 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:22.117 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:22.117 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:22.117 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:22.117 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:22.117 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:22.117 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:22.117 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:22.117 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:22.117 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:22.117 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:22.117 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:22.117 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:22.117 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:22.117 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:22.117 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:22.117 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:22.117 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:22.117 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:22.117 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:22.117 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:22.117 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:22.117 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:22.117 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:22.117 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:22.117 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:22.117 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:22.117 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:22.117 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:22.117 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:22.117 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:22.117 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:22.117 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:22.117 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:22.117 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:22.117 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:22.117 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:22.117 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:22.117 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:22.117 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:22.117 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:22.117 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:22.117 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:22.117 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:22.117 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:22.117 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:22.117 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:22.117 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:22.117 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:22.117 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:22.117 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:22.117 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:22.117 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:22.117 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:22.117 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:22.117 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:22.117 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:22.117 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:22.117 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:22.117 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:22.117 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:22.117 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:22.117 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:22.117 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:22.117 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:22.117 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:22.117 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:22.117 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:22.117 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:22.117 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:22.117 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:22.117 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:22.117 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:22.117 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:22.117 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:22.117 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:22.117 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:22.117 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:22.117 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:22.117 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:22.117 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:22.117 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:22.117 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:22.117 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:22.117 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:22.117 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:22.117 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:22.118 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:22.118 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:22.118 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:22.118 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:22.118 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:22.118 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:22.118 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:22.118 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:22.118 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:22.118 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:22.118 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:22.118 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:22.118 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:22.118 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:22.118 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:22.118 13:52:00 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:22.118 13:52:00 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:22.118 00:03:22.118 real 0m35.751s 00:03:22.118 user 4m11.055s 00:03:22.118 sys 0m35.250s 00:03:22.118 13:52:00 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:22.118 ************************************ 00:03:22.118 END TEST build_native_dpdk 00:03:22.118 ************************************ 00:03:22.118 13:52:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:22.118 13:52:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:22.118 13:52:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:22.118 13:52:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:22.377 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:22.377 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.377 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:22.377 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:22.634 Using 'verbs' RDMA provider 00:03:33.540 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:43.541 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:43.541 Creating mk/config.mk...done. 00:03:43.541 Creating mk/cc.flags.mk...done. 00:03:43.541 Type 'make' to build. 00:03:43.541 13:52:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:43.541 13:52:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:43.541 13:52:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:43.541 13:52:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.541 ************************************ 00:03:43.541 START TEST make 00:03:43.541 ************************************ 00:03:43.541 13:52:21 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:43.800 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:43.800 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:43.800 meson setup builddir \ 00:03:43.800 -Dwith-libaio=enabled \ 00:03:43.800 -Dwith-liburing=enabled \ 00:03:43.800 -Dwith-libvfn=disabled \ 00:03:43.800 -Dwith-spdk=false && \ 00:03:43.800 meson compile -C builddir && \ 00:03:43.800 cd -) 00:03:43.800 make[1]: Nothing to be done for 'all'. 
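The configure invocation above resolves the freshly staged DPDK through the pkg-config metadata installed earlier in this log (libdpdk-libs.pc and libdpdk.pc, placed under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig, which configure reports as "Using ... for additional libs"). As a minimal sketch of that same lookup, assuming only the staging paths shown in this log, an out-of-tree consumer could query the flags directly:

    # Point pkg-config at the staged DPDK install (path taken from the log above)
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # Standard pkg-config queries against the installed libdpdk.pc:
    pkg-config --cflags libdpdk   # compiler flags, e.g. the -I.../dpdk/build/include seen above
    pkg-config --libs libdpdk     # linker flags for the librte_* shared libraries

pkg-config --cflags and --libs are standard options; the exact flag text they print depends on how this DPDK build was configured and is not reproduced here.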
00:03:45.702 The Meson build system 00:03:45.702 Version: 1.5.0 00:03:45.702 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:45.702 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:45.702 Build type: native build 00:03:45.702 Project name: xnvme 00:03:45.702 Project version: 0.7.3 00:03:45.702 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:45.702 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:45.702 Host machine cpu family: x86_64 00:03:45.702 Host machine cpu: x86_64 00:03:45.702 Message: host_machine.system: linux 00:03:45.702 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:45.702 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:45.702 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:45.702 Run-time dependency threads found: YES 00:03:45.702 Has header "setupapi.h" : NO 00:03:45.702 Has header "linux/blkzoned.h" : YES 00:03:45.702 Has header "linux/blkzoned.h" : YES (cached) 00:03:45.702 Has header "libaio.h" : YES 00:03:45.702 Library aio found: YES 00:03:45.702 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:45.702 Run-time dependency liburing found: YES 2.2 00:03:45.702 Dependency libvfn skipped: feature with-libvfn disabled 00:03:45.702 Run-time dependency appleframeworks found: NO (tried framework) 00:03:45.702 Run-time dependency appleframeworks found: NO (tried framework) 00:03:45.702 Configuring xnvme_config.h using configuration 00:03:45.702 Configuring xnvme.spec using configuration 00:03:45.702 Run-time dependency bash-completion found: YES 2.11 00:03:45.702 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:45.702 Program cp found: YES (/usr/bin/cp) 00:03:45.702 Has header "winsock2.h" : NO 00:03:45.702 Has header "dbghelp.h" : NO 00:03:45.702 Library rpcrt4 found: NO 00:03:45.702 Library rt found: YES 00:03:45.702 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:45.702 Found CMake: /usr/bin/cmake (3.27.7) 00:03:45.702 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:45.702 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:45.702 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:45.702 Build targets in project: 32 00:03:45.702 00:03:45.702 xnvme 0.7.3 00:03:45.702 00:03:45.702 User defined options 00:03:45.702 with-libaio : enabled 00:03:45.702 with-liburing: enabled 00:03:45.702 with-libvfn : disabled 00:03:45.702 with-spdk : false 00:03:45.702 00:03:45.702 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:46.270 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:46.270 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:46.270 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:46.270 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:46.270 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:46.270 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:46.270 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:46.270 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:46.270 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:46.270 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:46.270 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:46.270 
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:46.270 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:46.270 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:46.270 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:46.270 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:46.270 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:46.270 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:46.270 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:46.270 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:46.270 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:46.270 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:46.529 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:46.529 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:46.529 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:46.529 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:46.529 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:46.529 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:46.529 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:46.529 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:46.529 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:46.529 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:46.529 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:46.529 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:46.529 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:46.529 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:46.529 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:46.529 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:46.529 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:46.529 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:46.529 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:46.529 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:46.529 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:46.529 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:46.529 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:46.529 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:46.529 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:46.529 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:46.529 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:46.529 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:46.529 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:46.529 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:46.529 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:46.529 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:46.529 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:46.529 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:46.529 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:46.529 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:46.529 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:46.529 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:46.788 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:46.788 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:46.788 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:46.788 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:46.788 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:46.788 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:46.788 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:46.788 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:46.788 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:46.788 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:46.788 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:46.788 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:46.788 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:46.788 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:46.788 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:46.788 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:46.788 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:46.788 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:46.788 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:46.788 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:46.788 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:46.788 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:47.049 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:47.049 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:47.049 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:47.049 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:47.049 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:47.049 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:47.049 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:47.049 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:47.049 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:47.049 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:47.049 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:47.049 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:47.049 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:47.049 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:47.049 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:47.049 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:47.049 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:47.049 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:47.049 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:47.049 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:47.049 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:47.049 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:47.049 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:47.049 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:47.049 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:47.049 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:47.049 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:47.049 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:47.049 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:47.049 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:47.049 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:47.049 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:47.049 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:47.356 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:47.356 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:47.356 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:47.356 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:47.356 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:47.356 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:47.356 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:47.356 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:47.356 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:47.356 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:47.356 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:47.356 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:47.356 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:47.356 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:47.356 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:47.356 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:47.356 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:47.356 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:47.356 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:47.356 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:47.356 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:47.356 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:47.356 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:47.356 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:47.356 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:47.356 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:47.356 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:47.615 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:47.615 [143/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:47.615 [144/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:47.615 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:47.615 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:47.615 [147/203] Linking target lib/libxnvme.so 00:03:47.615 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:47.615 [149/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:47.615 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:47.615 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:47.615 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:47.615 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:47.615 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:47.615 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:47.615 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:47.615 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:47.615 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:47.615 [159/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:47.873 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:47.873 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:47.873 [162/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:47.873 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:47.873 [164/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:47.874 [165/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:47.874 [166/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:47.874 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:47.874 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:47.874 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:47.874 [170/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:47.874 [171/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:47.874 [172/203] Linking static target lib/libxnvme.a 00:03:47.874 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:47.874 [174/203] Linking target tests/xnvme_tests_buf 00:03:47.874 [175/203] Linking target tests/xnvme_tests_async_intf 00:03:47.874 [176/203] Linking target tests/xnvme_tests_cli 00:03:47.874 [177/203] Linking target tests/xnvme_tests_enum 00:03:47.874 [178/203] Linking target tests/xnvme_tests_scc 00:03:47.874 [179/203] Linking target tests/xnvme_tests_xnvme_file 00:03:47.874 [180/203] Linking target tests/xnvme_tests_lblk 00:03:47.874 [181/203] Linking target tests/xnvme_tests_ioworker 00:03:47.874 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:47.874 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:47.874 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:47.874 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:48.132 [186/203] Linking target tests/xnvme_tests_znd_state 00:03:48.132 [187/203] Linking target tests/xnvme_tests_kvs 00:03:48.132 [188/203] Linking target tools/lblk 00:03:48.132 [189/203] Linking target tests/xnvme_tests_map 00:03:48.132 [190/203] Linking target examples/xnvme_dev 00:03:48.132 [191/203] Linking target 
tools/xdd 00:03:48.132 [192/203] Linking target examples/xnvme_hello 00:03:48.132 [193/203] Linking target tools/zoned 00:03:48.132 [194/203] Linking target tools/kvs 00:03:48.132 [195/203] Linking target tools/xnvme_file 00:03:48.133 [196/203] Linking target examples/xnvme_enum 00:03:48.133 [197/203] Linking target examples/xnvme_io_async 00:03:48.133 [198/203] Linking target examples/xnvme_single_async 00:03:48.133 [199/203] Linking target examples/zoned_io_async 00:03:48.133 [200/203] Linking target examples/xnvme_single_sync 00:03:48.133 [201/203] Linking target examples/zoned_io_sync 00:03:48.133 [202/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:48.133 [203/203] Linking target tools/xnvme 00:03:48.133 INFO: autodetecting backend as ninja 00:03:48.133 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:48.133 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:20.209 CC lib/ut/ut.o 00:04:20.209 CC lib/log/log.o 00:04:20.209 CC lib/log/log_flags.o 00:04:20.209 CC lib/log/log_deprecated.o 00:04:20.209 CC lib/ut_mock/mock.o 00:04:20.209 LIB libspdk_ut.a 00:04:20.209 LIB libspdk_log.a 00:04:20.209 LIB libspdk_ut_mock.a 00:04:20.209 SO libspdk_ut.so.2.0 00:04:20.209 SO libspdk_log.so.7.0 00:04:20.209 SO libspdk_ut_mock.so.6.0 00:04:20.209 SYMLINK libspdk_ut.so 00:04:20.209 SYMLINK libspdk_log.so 00:04:20.209 SYMLINK libspdk_ut_mock.so 00:04:20.209 CC lib/dma/dma.o 00:04:20.209 CC lib/ioat/ioat.o 00:04:20.209 CC lib/util/base64.o 00:04:20.209 CC lib/util/bit_array.o 00:04:20.209 CC lib/util/cpuset.o 00:04:20.209 CXX lib/trace_parser/trace.o 00:04:20.209 CC lib/util/crc32.o 00:04:20.209 CC lib/util/crc32c.o 00:04:20.209 CC lib/util/crc16.o 00:04:20.209 CC lib/vfio_user/host/vfio_user_pci.o 00:04:20.209 CC lib/util/crc32_ieee.o 00:04:20.209 CC lib/util/crc64.o 00:04:20.209 CC lib/util/dif.o 00:04:20.209 CC lib/util/fd.o 00:04:20.209 LIB libspdk_dma.a 00:04:20.209 CC lib/util/fd_group.o 00:04:20.209 SO libspdk_dma.so.5.0 00:04:20.209 CC lib/util/file.o 00:04:20.209 CC lib/util/hexlify.o 00:04:20.209 CC lib/vfio_user/host/vfio_user.o 00:04:20.209 SYMLINK libspdk_dma.so 00:04:20.209 CC lib/util/iov.o 00:04:20.209 LIB libspdk_ioat.a 00:04:20.209 CC lib/util/math.o 00:04:20.209 SO libspdk_ioat.so.7.0 00:04:20.209 CC lib/util/net.o 00:04:20.209 SYMLINK libspdk_ioat.so 00:04:20.209 CC lib/util/pipe.o 00:04:20.209 CC lib/util/strerror_tls.o 00:04:20.209 CC lib/util/string.o 00:04:20.209 CC lib/util/uuid.o 00:04:20.209 CC lib/util/xor.o 00:04:20.209 CC lib/util/zipf.o 00:04:20.209 LIB libspdk_vfio_user.a 00:04:20.209 CC lib/util/md5.o 00:04:20.209 SO libspdk_vfio_user.so.5.0 00:04:20.209 SYMLINK libspdk_vfio_user.so 00:04:20.209 LIB libspdk_util.a 00:04:20.209 LIB libspdk_trace_parser.a 00:04:20.209 SO libspdk_trace_parser.so.6.0 00:04:20.209 SO libspdk_util.so.10.0 00:04:20.209 SYMLINK libspdk_trace_parser.so 00:04:20.209 SYMLINK libspdk_util.so 00:04:20.209 CC lib/conf/conf.o 00:04:20.209 CC lib/rdma_provider/common.o 00:04:20.209 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:20.209 CC lib/json/json_parse.o 00:04:20.209 CC lib/rdma_utils/rdma_utils.o 00:04:20.209 CC lib/json/json_write.o 00:04:20.209 CC lib/json/json_util.o 00:04:20.209 CC lib/vmd/vmd.o 00:04:20.209 CC lib/idxd/idxd.o 00:04:20.209 CC lib/env_dpdk/env.o 00:04:20.209 CC lib/env_dpdk/memory.o 00:04:20.209 LIB libspdk_rdma_provider.a 00:04:20.209 SO libspdk_rdma_provider.so.6.0 00:04:20.209 LIB libspdk_conf.a 00:04:20.209 CC lib/vmd/led.o 00:04:20.209 SO 
libspdk_conf.so.6.0 00:04:20.209 CC lib/env_dpdk/pci.o 00:04:20.209 SYMLINK libspdk_rdma_provider.so 00:04:20.209 CC lib/env_dpdk/init.o 00:04:20.209 LIB libspdk_rdma_utils.a 00:04:20.209 LIB libspdk_json.a 00:04:20.209 SYMLINK libspdk_conf.so 00:04:20.209 CC lib/env_dpdk/threads.o 00:04:20.209 SO libspdk_rdma_utils.so.1.0 00:04:20.209 SO libspdk_json.so.6.0 00:04:20.209 SYMLINK libspdk_rdma_utils.so 00:04:20.209 SYMLINK libspdk_json.so 00:04:20.209 CC lib/idxd/idxd_user.o 00:04:20.209 CC lib/env_dpdk/pci_ioat.o 00:04:20.209 CC lib/idxd/idxd_kernel.o 00:04:20.209 CC lib/env_dpdk/pci_virtio.o 00:04:20.209 CC lib/env_dpdk/pci_vmd.o 00:04:20.209 CC lib/env_dpdk/pci_idxd.o 00:04:20.209 CC lib/env_dpdk/pci_event.o 00:04:20.209 CC lib/env_dpdk/sigbus_handler.o 00:04:20.209 CC lib/env_dpdk/pci_dpdk.o 00:04:20.209 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:20.209 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:20.209 LIB libspdk_idxd.a 00:04:20.209 SO libspdk_idxd.so.12.1 00:04:20.209 LIB libspdk_vmd.a 00:04:20.209 SO libspdk_vmd.so.6.0 00:04:20.209 SYMLINK libspdk_idxd.so 00:04:20.209 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.209 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.209 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.209 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:20.209 SYMLINK libspdk_vmd.so 00:04:20.209 LIB libspdk_jsonrpc.a 00:04:20.209 SO libspdk_jsonrpc.so.6.0 00:04:20.209 SYMLINK libspdk_jsonrpc.so 00:04:20.209 LIB libspdk_env_dpdk.a 00:04:20.209 SO libspdk_env_dpdk.so.15.0 00:04:20.209 CC lib/rpc/rpc.o 00:04:20.209 SYMLINK libspdk_env_dpdk.so 00:04:20.209 LIB libspdk_rpc.a 00:04:20.209 SO libspdk_rpc.so.6.0 00:04:20.209 SYMLINK libspdk_rpc.so 00:04:20.209 CC lib/keyring/keyring_rpc.o 00:04:20.209 CC lib/keyring/keyring.o 00:04:20.209 CC lib/notify/notify.o 00:04:20.209 CC lib/notify/notify_rpc.o 00:04:20.209 CC lib/trace/trace.o 00:04:20.209 CC lib/trace/trace_flags.o 00:04:20.209 CC lib/trace/trace_rpc.o 00:04:20.209 LIB libspdk_notify.a 00:04:20.209 LIB libspdk_keyring.a 00:04:20.209 SO libspdk_notify.so.6.0 00:04:20.209 SO libspdk_keyring.so.2.0 00:04:20.209 SYMLINK libspdk_notify.so 00:04:20.209 SYMLINK libspdk_keyring.so 00:04:20.209 LIB libspdk_trace.a 00:04:20.209 SO libspdk_trace.so.11.0 00:04:20.209 SYMLINK libspdk_trace.so 00:04:20.209 CC lib/sock/sock.o 00:04:20.209 CC lib/sock/sock_rpc.o 00:04:20.209 CC lib/thread/thread.o 00:04:20.209 CC lib/thread/iobuf.o 00:04:20.467 LIB libspdk_sock.a 00:04:20.467 SO libspdk_sock.so.10.0 00:04:20.467 SYMLINK libspdk_sock.so 00:04:20.726 CC lib/nvme/nvme_ctrlr.o 00:04:20.726 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:20.726 CC lib/nvme/nvme_ns_cmd.o 00:04:20.726 CC lib/nvme/nvme_ns.o 00:04:20.726 CC lib/nvme/nvme_fabric.o 00:04:20.726 CC lib/nvme/nvme_qpair.o 00:04:20.726 CC lib/nvme/nvme_pcie_common.o 00:04:20.726 CC lib/nvme/nvme_pcie.o 00:04:20.726 CC lib/nvme/nvme.o 00:04:21.293 CC lib/nvme/nvme_quirks.o 00:04:21.293 CC lib/nvme/nvme_transport.o 00:04:21.293 CC lib/nvme/nvme_discovery.o 00:04:21.293 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:21.293 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:21.566 CC lib/nvme/nvme_tcp.o 00:04:21.566 CC lib/nvme/nvme_opal.o 00:04:21.566 LIB libspdk_thread.a 00:04:21.566 SO libspdk_thread.so.10.1 00:04:21.566 SYMLINK libspdk_thread.so 00:04:21.566 CC lib/nvme/nvme_io_msg.o 00:04:21.566 CC lib/nvme/nvme_poll_group.o 00:04:21.566 CC lib/nvme/nvme_zns.o 00:04:21.824 CC lib/nvme/nvme_stubs.o 00:04:21.824 CC lib/nvme/nvme_auth.o 00:04:21.824 CC lib/nvme/nvme_cuse.o 00:04:22.082 CC lib/nvme/nvme_rdma.o 00:04:22.082 CC lib/accel/accel.o 
00:04:22.082 CC lib/accel/accel_rpc.o 00:04:22.082 CC lib/blob/blobstore.o 00:04:22.082 CC lib/accel/accel_sw.o 00:04:22.340 CC lib/init/json_config.o 00:04:22.340 CC lib/init/subsystem.o 00:04:22.598 CC lib/blob/request.o 00:04:22.598 CC lib/init/subsystem_rpc.o 00:04:22.598 CC lib/virtio/virtio.o 00:04:22.598 CC lib/init/rpc.o 00:04:22.598 CC lib/blob/zeroes.o 00:04:22.855 LIB libspdk_init.a 00:04:22.855 CC lib/fsdev/fsdev.o 00:04:22.855 SO libspdk_init.so.6.0 00:04:22.855 CC lib/blob/blob_bs_dev.o 00:04:22.855 SYMLINK libspdk_init.so 00:04:22.855 CC lib/virtio/virtio_vhost_user.o 00:04:22.855 CC lib/fsdev/fsdev_io.o 00:04:22.855 CC lib/fsdev/fsdev_rpc.o 00:04:22.855 CC lib/virtio/virtio_vfio_user.o 00:04:22.855 CC lib/virtio/virtio_pci.o 00:04:23.113 CC lib/event/app.o 00:04:23.113 CC lib/event/log_rpc.o 00:04:23.113 CC lib/event/reactor.o 00:04:23.113 CC lib/event/app_rpc.o 00:04:23.113 CC lib/event/scheduler_static.o 00:04:23.113 LIB libspdk_virtio.a 00:04:23.372 LIB libspdk_nvme.a 00:04:23.372 SO libspdk_virtio.so.7.0 00:04:23.372 SYMLINK libspdk_virtio.so 00:04:23.372 LIB libspdk_accel.a 00:04:23.372 SO libspdk_accel.so.16.0 00:04:23.372 SO libspdk_nvme.so.14.0 00:04:23.372 LIB libspdk_fsdev.a 00:04:23.372 SO libspdk_fsdev.so.1.0 00:04:23.372 SYMLINK libspdk_accel.so 00:04:23.372 SYMLINK libspdk_fsdev.so 00:04:23.630 SYMLINK libspdk_nvme.so 00:04:23.630 CC lib/bdev/bdev_rpc.o 00:04:23.630 CC lib/bdev/bdev.o 00:04:23.630 CC lib/bdev/bdev_zone.o 00:04:23.630 CC lib/bdev/part.o 00:04:23.630 CC lib/bdev/scsi_nvme.o 00:04:23.630 LIB libspdk_event.a 00:04:23.630 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:23.630 SO libspdk_event.so.14.0 00:04:23.630 SYMLINK libspdk_event.so 00:04:24.196 LIB libspdk_fuse_dispatcher.a 00:04:24.196 SO libspdk_fuse_dispatcher.so.1.0 00:04:24.196 SYMLINK libspdk_fuse_dispatcher.so 00:04:25.131 LIB libspdk_blob.a 00:04:25.131 SO libspdk_blob.so.11.0 00:04:25.131 SYMLINK libspdk_blob.so 00:04:25.390 CC lib/lvol/lvol.o 00:04:25.390 CC lib/blobfs/tree.o 00:04:25.390 CC lib/blobfs/blobfs.o 00:04:26.323 LIB libspdk_blobfs.a 00:04:26.323 SO libspdk_blobfs.so.10.0 00:04:26.323 LIB libspdk_lvol.a 00:04:26.323 SYMLINK libspdk_blobfs.so 00:04:26.323 SO libspdk_lvol.so.10.0 00:04:26.323 SYMLINK libspdk_lvol.so 00:04:26.323 LIB libspdk_bdev.a 00:04:26.323 SO libspdk_bdev.so.16.0 00:04:26.583 SYMLINK libspdk_bdev.so 00:04:26.583 CC lib/ftl/ftl_core.o 00:04:26.583 CC lib/ftl/ftl_init.o 00:04:26.583 CC lib/ftl/ftl_layout.o 00:04:26.583 CC lib/ftl/ftl_debug.o 00:04:26.583 CC lib/ftl/ftl_io.o 00:04:26.583 CC lib/ftl/ftl_sb.o 00:04:26.583 CC lib/scsi/dev.o 00:04:26.583 CC lib/nvmf/ctrlr.o 00:04:26.583 CC lib/ublk/ublk.o 00:04:26.583 CC lib/nbd/nbd.o 00:04:26.841 CC lib/ftl/ftl_l2p.o 00:04:26.841 CC lib/ftl/ftl_l2p_flat.o 00:04:26.841 CC lib/scsi/lun.o 00:04:26.841 CC lib/nvmf/ctrlr_discovery.o 00:04:26.841 CC lib/nvmf/ctrlr_bdev.o 00:04:26.841 CC lib/nvmf/subsystem.o 00:04:27.098 CC lib/scsi/port.o 00:04:27.098 CC lib/nvmf/nvmf.o 00:04:27.098 CC lib/ftl/ftl_nv_cache.o 00:04:27.098 CC lib/nbd/nbd_rpc.o 00:04:27.098 CC lib/scsi/scsi.o 00:04:27.098 CC lib/ublk/ublk_rpc.o 00:04:27.098 LIB libspdk_nbd.a 00:04:27.098 SO libspdk_nbd.so.7.0 00:04:27.356 CC lib/scsi/scsi_bdev.o 00:04:27.356 CC lib/scsi/scsi_pr.o 00:04:27.356 SYMLINK libspdk_nbd.so 00:04:27.356 CC lib/ftl/ftl_band.o 00:04:27.356 LIB libspdk_ublk.a 00:04:27.356 SO libspdk_ublk.so.3.0 00:04:27.356 CC lib/nvmf/nvmf_rpc.o 00:04:27.356 SYMLINK libspdk_ublk.so 00:04:27.356 CC lib/nvmf/transport.o 00:04:27.614 CC 
lib/nvmf/tcp.o 00:04:27.615 CC lib/scsi/scsi_rpc.o 00:04:27.615 CC lib/ftl/ftl_band_ops.o 00:04:27.615 CC lib/scsi/task.o 00:04:27.615 CC lib/ftl/ftl_writer.o 00:04:27.874 CC lib/nvmf/stubs.o 00:04:27.874 LIB libspdk_scsi.a 00:04:27.874 SO libspdk_scsi.so.9.0 00:04:27.874 CC lib/ftl/ftl_rq.o 00:04:27.874 CC lib/nvmf/mdns_server.o 00:04:27.874 CC lib/nvmf/rdma.o 00:04:27.874 SYMLINK libspdk_scsi.so 00:04:27.874 CC lib/nvmf/auth.o 00:04:28.134 CC lib/ftl/ftl_reloc.o 00:04:28.134 CC lib/ftl/ftl_l2p_cache.o 00:04:28.134 CC lib/iscsi/conn.o 00:04:28.134 CC lib/iscsi/init_grp.o 00:04:28.134 CC lib/iscsi/iscsi.o 00:04:28.460 CC lib/ftl/ftl_p2l.o 00:04:28.460 CC lib/iscsi/param.o 00:04:28.460 CC lib/iscsi/portal_grp.o 00:04:28.461 CC lib/iscsi/tgt_node.o 00:04:28.720 CC lib/iscsi/iscsi_subsystem.o 00:04:28.720 CC lib/iscsi/iscsi_rpc.o 00:04:28.720 CC lib/ftl/ftl_p2l_log.o 00:04:28.720 CC lib/iscsi/task.o 00:04:28.720 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.720 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.720 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:28.978 CC lib/vhost/vhost.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:28.978 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:28.978 CC lib/vhost/vhost_rpc.o 00:04:29.239 CC lib/vhost/vhost_scsi.o 00:04:29.239 CC lib/ftl/utils/ftl_conf.o 00:04:29.239 CC lib/ftl/utils/ftl_md.o 00:04:29.239 CC lib/ftl/utils/ftl_mempool.o 00:04:29.239 CC lib/ftl/utils/ftl_bitmap.o 00:04:29.239 CC lib/ftl/utils/ftl_property.o 00:04:29.500 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:29.500 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:29.500 CC lib/vhost/vhost_blk.o 00:04:29.500 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:29.500 CC lib/vhost/rte_vhost_user.o 00:04:29.500 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:29.500 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:29.500 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:29.759 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:29.759 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:29.759 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:29.759 LIB libspdk_iscsi.a 00:04:29.759 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:29.759 SO libspdk_iscsi.so.8.0 00:04:29.759 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:29.759 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:29.759 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:29.759 CC lib/ftl/base/ftl_base_dev.o 00:04:29.759 SYMLINK libspdk_iscsi.so 00:04:29.759 CC lib/ftl/base/ftl_base_bdev.o 00:04:29.759 CC lib/ftl/ftl_trace.o 00:04:30.017 LIB libspdk_ftl.a 00:04:30.275 LIB libspdk_nvmf.a 00:04:30.275 SO libspdk_ftl.so.9.0 00:04:30.275 SO libspdk_nvmf.so.19.0 00:04:30.275 LIB libspdk_vhost.a 00:04:30.275 SO libspdk_vhost.so.8.0 00:04:30.275 SYMLINK libspdk_ftl.so 00:04:30.533 SYMLINK libspdk_vhost.so 00:04:30.533 SYMLINK libspdk_nvmf.so 00:04:30.533 CC module/env_dpdk/env_dpdk_rpc.o 00:04:30.792 CC module/accel/error/accel_error.o 00:04:30.792 CC module/blob/bdev/blob_bdev.o 00:04:30.792 CC module/sock/posix/posix.o 00:04:30.792 CC module/fsdev/aio/fsdev_aio.o 00:04:30.792 CC module/accel/dsa/accel_dsa.o 00:04:30.792 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:30.792 CC module/accel/ioat/accel_ioat.o 00:04:30.792 CC module/accel/iaa/accel_iaa.o 00:04:30.792 CC 
module/keyring/file/keyring.o 00:04:30.792 LIB libspdk_env_dpdk_rpc.a 00:04:30.792 SO libspdk_env_dpdk_rpc.so.6.0 00:04:30.792 SYMLINK libspdk_env_dpdk_rpc.so 00:04:30.792 CC module/accel/ioat/accel_ioat_rpc.o 00:04:30.792 CC module/keyring/file/keyring_rpc.o 00:04:30.792 CC module/accel/error/accel_error_rpc.o 00:04:30.792 LIB libspdk_scheduler_dynamic.a 00:04:30.792 SO libspdk_scheduler_dynamic.so.4.0 00:04:30.792 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:30.792 CC module/accel/iaa/accel_iaa_rpc.o 00:04:30.792 LIB libspdk_blob_bdev.a 00:04:31.057 SYMLINK libspdk_scheduler_dynamic.so 00:04:31.057 SO libspdk_blob_bdev.so.11.0 00:04:31.057 LIB libspdk_keyring_file.a 00:04:31.057 LIB libspdk_accel_ioat.a 00:04:31.057 SO libspdk_keyring_file.so.2.0 00:04:31.057 LIB libspdk_accel_error.a 00:04:31.057 SO libspdk_accel_ioat.so.6.0 00:04:31.057 SYMLINK libspdk_blob_bdev.so 00:04:31.057 CC module/accel/dsa/accel_dsa_rpc.o 00:04:31.057 SYMLINK libspdk_keyring_file.so 00:04:31.057 SO libspdk_accel_error.so.2.0 00:04:31.057 SYMLINK libspdk_accel_ioat.so 00:04:31.057 LIB libspdk_accel_iaa.a 00:04:31.057 CC module/fsdev/aio/linux_aio_mgr.o 00:04:31.057 SYMLINK libspdk_accel_error.so 00:04:31.057 SO libspdk_accel_iaa.so.3.0 00:04:31.057 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:31.057 LIB libspdk_accel_dsa.a 00:04:31.057 CC module/scheduler/gscheduler/gscheduler.o 00:04:31.057 SO libspdk_accel_dsa.so.5.0 00:04:31.057 SYMLINK libspdk_accel_iaa.so 00:04:31.057 CC module/keyring/linux/keyring.o 00:04:31.057 CC module/keyring/linux/keyring_rpc.o 00:04:31.057 SYMLINK libspdk_accel_dsa.so 00:04:31.318 LIB libspdk_fsdev_aio.a 00:04:31.318 LIB libspdk_scheduler_dpdk_governor.a 00:04:31.318 CC module/bdev/delay/vbdev_delay.o 00:04:31.318 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:31.318 SO libspdk_fsdev_aio.so.1.0 00:04:31.318 CC module/blobfs/bdev/blobfs_bdev.o 00:04:31.318 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:31.318 LIB libspdk_scheduler_gscheduler.a 00:04:31.318 LIB libspdk_keyring_linux.a 00:04:31.318 LIB libspdk_sock_posix.a 00:04:31.318 SO libspdk_scheduler_gscheduler.so.4.0 00:04:31.318 CC module/bdev/error/vbdev_error.o 00:04:31.318 SO libspdk_keyring_linux.so.1.0 00:04:31.318 SYMLINK libspdk_fsdev_aio.so 00:04:31.318 SO libspdk_sock_posix.so.6.0 00:04:31.318 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:31.318 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:31.318 CC module/bdev/error/vbdev_error_rpc.o 00:04:31.318 SYMLINK libspdk_keyring_linux.so 00:04:31.318 SYMLINK libspdk_scheduler_gscheduler.so 00:04:31.318 CC module/bdev/gpt/gpt.o 00:04:31.318 SYMLINK libspdk_sock_posix.so 00:04:31.318 CC module/bdev/gpt/vbdev_gpt.o 00:04:31.579 CC module/bdev/malloc/bdev_malloc.o 00:04:31.579 LIB libspdk_blobfs_bdev.a 00:04:31.579 CC module/bdev/lvol/vbdev_lvol.o 00:04:31.579 SO libspdk_blobfs_bdev.so.6.0 00:04:31.579 CC module/bdev/null/bdev_null.o 00:04:31.579 LIB libspdk_bdev_delay.a 00:04:31.579 CC module/bdev/null/bdev_null_rpc.o 00:04:31.579 SO libspdk_bdev_delay.so.6.0 00:04:31.579 LIB libspdk_bdev_error.a 00:04:31.579 CC module/bdev/nvme/bdev_nvme.o 00:04:31.579 SYMLINK libspdk_blobfs_bdev.so 00:04:31.579 CC module/bdev/passthru/vbdev_passthru.o 00:04:31.579 SO libspdk_bdev_error.so.6.0 00:04:31.579 SYMLINK libspdk_bdev_delay.so 00:04:31.579 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:31.579 LIB libspdk_bdev_gpt.a 00:04:31.579 SYMLINK libspdk_bdev_error.so 00:04:31.579 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:31.579 SO libspdk_bdev_gpt.so.6.0 00:04:31.579 CC 
module/bdev/raid/bdev_raid.o 00:04:31.579 SYMLINK libspdk_bdev_gpt.so 00:04:31.838 LIB libspdk_bdev_null.a 00:04:31.838 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:31.838 SO libspdk_bdev_null.so.6.0 00:04:31.838 CC module/bdev/split/vbdev_split.o 00:04:31.838 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:31.838 SYMLINK libspdk_bdev_null.so 00:04:31.838 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:31.838 CC module/bdev/split/vbdev_split_rpc.o 00:04:31.838 LIB libspdk_bdev_passthru.a 00:04:31.838 SO libspdk_bdev_passthru.so.6.0 00:04:31.838 LIB libspdk_bdev_malloc.a 00:04:31.838 CC module/bdev/xnvme/bdev_xnvme.o 00:04:31.838 SO libspdk_bdev_malloc.so.6.0 00:04:31.838 LIB libspdk_bdev_lvol.a 00:04:31.838 SYMLINK libspdk_bdev_passthru.so 00:04:31.838 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:32.099 SO libspdk_bdev_lvol.so.6.0 00:04:32.099 SYMLINK libspdk_bdev_malloc.so 00:04:32.099 LIB libspdk_bdev_split.a 00:04:32.099 SO libspdk_bdev_split.so.6.0 00:04:32.099 SYMLINK libspdk_bdev_lvol.so 00:04:32.099 SYMLINK libspdk_bdev_split.so 00:04:32.099 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:32.099 CC module/bdev/nvme/nvme_rpc.o 00:04:32.099 LIB libspdk_bdev_xnvme.a 00:04:32.099 CC module/bdev/aio/bdev_aio.o 00:04:32.099 SO libspdk_bdev_xnvme.so.3.0 00:04:32.099 CC module/bdev/ftl/bdev_ftl.o 00:04:32.099 CC module/bdev/iscsi/bdev_iscsi.o 00:04:32.099 LIB libspdk_bdev_zone_block.a 00:04:32.099 SYMLINK libspdk_bdev_xnvme.so 00:04:32.099 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:32.099 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:32.099 SO libspdk_bdev_zone_block.so.6.0 00:04:32.099 SYMLINK libspdk_bdev_zone_block.so 00:04:32.361 CC module/bdev/aio/bdev_aio_rpc.o 00:04:32.361 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:32.361 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:32.361 CC module/bdev/raid/bdev_raid_rpc.o 00:04:32.361 LIB libspdk_bdev_aio.a 00:04:32.361 SO libspdk_bdev_aio.so.6.0 00:04:32.361 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:32.361 SYMLINK libspdk_bdev_aio.so 00:04:32.361 CC module/bdev/nvme/bdev_mdns_client.o 00:04:32.361 CC module/bdev/raid/bdev_raid_sb.o 00:04:32.361 LIB libspdk_bdev_iscsi.a 00:04:32.361 CC module/bdev/nvme/vbdev_opal.o 00:04:32.361 CC module/bdev/raid/raid0.o 00:04:32.623 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:32.623 SO libspdk_bdev_iscsi.so.6.0 00:04:32.623 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:32.623 CC module/bdev/raid/raid1.o 00:04:32.623 SYMLINK libspdk_bdev_iscsi.so 00:04:32.623 CC module/bdev/raid/concat.o 00:04:32.623 LIB libspdk_bdev_ftl.a 00:04:32.623 SO libspdk_bdev_ftl.so.6.0 00:04:32.623 LIB libspdk_bdev_virtio.a 00:04:32.623 SYMLINK libspdk_bdev_ftl.so 00:04:32.623 SO libspdk_bdev_virtio.so.6.0 00:04:32.623 SYMLINK libspdk_bdev_virtio.so 00:04:32.883 LIB libspdk_bdev_raid.a 00:04:32.883 SO libspdk_bdev_raid.so.6.0 00:04:32.883 SYMLINK libspdk_bdev_raid.so 00:04:33.450 LIB libspdk_bdev_nvme.a 00:04:33.450 SO libspdk_bdev_nvme.so.7.0 00:04:33.708 SYMLINK libspdk_bdev_nvme.so 00:04:33.966 CC module/event/subsystems/vmd/vmd.o 00:04:33.966 CC module/event/subsystems/keyring/keyring.o 00:04:33.966 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:33.966 CC module/event/subsystems/sock/sock.o 00:04:33.966 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:33.966 CC module/event/subsystems/scheduler/scheduler.o 00:04:33.966 CC module/event/subsystems/fsdev/fsdev.o 00:04:33.966 CC module/event/subsystems/iobuf/iobuf.o 00:04:33.966 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:33.966 LIB libspdk_event_keyring.a 
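Each LIB line above denotes a per-component static archive being assembled. A rough equivalent of that step, with placeholder object names rather than the actual rule:

  # bundle a subsystem's objects into an indexed static archive
  ar crs libspdk_event_keyring.a keyring.o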
00:04:33.966 LIB libspdk_event_sock.a 00:04:33.966 LIB libspdk_event_vmd.a 00:04:33.966 LIB libspdk_event_fsdev.a 00:04:33.966 SO libspdk_event_keyring.so.1.0 00:04:34.224 SO libspdk_event_sock.so.5.0 00:04:34.224 LIB libspdk_event_vhost_blk.a 00:04:34.224 SO libspdk_event_fsdev.so.1.0 00:04:34.224 SO libspdk_event_vmd.so.6.0 00:04:34.224 LIB libspdk_event_scheduler.a 00:04:34.224 LIB libspdk_event_iobuf.a 00:04:34.224 SO libspdk_event_vhost_blk.so.3.0 00:04:34.224 SO libspdk_event_scheduler.so.4.0 00:04:34.224 SYMLINK libspdk_event_keyring.so 00:04:34.224 SYMLINK libspdk_event_fsdev.so 00:04:34.224 SYMLINK libspdk_event_sock.so 00:04:34.224 SO libspdk_event_iobuf.so.3.0 00:04:34.224 SYMLINK libspdk_event_vmd.so 00:04:34.224 SYMLINK libspdk_event_vhost_blk.so 00:04:34.224 SYMLINK libspdk_event_scheduler.so 00:04:34.224 SYMLINK libspdk_event_iobuf.so 00:04:34.483 CC module/event/subsystems/accel/accel.o 00:04:34.483 LIB libspdk_event_accel.a 00:04:34.740 SO libspdk_event_accel.so.6.0 00:04:34.740 SYMLINK libspdk_event_accel.so 00:04:34.997 CC module/event/subsystems/bdev/bdev.o 00:04:34.997 LIB libspdk_event_bdev.a 00:04:34.997 SO libspdk_event_bdev.so.6.0 00:04:35.255 SYMLINK libspdk_event_bdev.so 00:04:35.255 CC module/event/subsystems/nbd/nbd.o 00:04:35.255 CC module/event/subsystems/scsi/scsi.o 00:04:35.255 CC module/event/subsystems/ublk/ublk.o 00:04:35.255 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:35.255 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:35.255 LIB libspdk_event_nbd.a 00:04:35.513 LIB libspdk_event_ublk.a 00:04:35.513 SO libspdk_event_nbd.so.6.0 00:04:35.513 LIB libspdk_event_scsi.a 00:04:35.513 SO libspdk_event_ublk.so.3.0 00:04:35.513 SO libspdk_event_scsi.so.6.0 00:04:35.513 SYMLINK libspdk_event_nbd.so 00:04:35.513 SYMLINK libspdk_event_ublk.so 00:04:35.513 SYMLINK libspdk_event_scsi.so 00:04:35.513 LIB libspdk_event_nvmf.a 00:04:35.513 SO libspdk_event_nvmf.so.6.0 00:04:35.513 SYMLINK libspdk_event_nvmf.so 00:04:35.771 CC module/event/subsystems/iscsi/iscsi.o 00:04:35.771 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:35.771 LIB libspdk_event_vhost_scsi.a 00:04:35.771 LIB libspdk_event_iscsi.a 00:04:35.771 SO libspdk_event_vhost_scsi.so.3.0 00:04:35.771 SO libspdk_event_iscsi.so.6.0 00:04:35.771 SYMLINK libspdk_event_vhost_scsi.so 00:04:35.771 SYMLINK libspdk_event_iscsi.so 00:04:36.029 SO libspdk.so.6.0 00:04:36.029 SYMLINK libspdk.so 00:04:36.287 CXX app/trace/trace.o 00:04:36.287 CC app/trace_record/trace_record.o 00:04:36.287 CC app/spdk_lspci/spdk_lspci.o 00:04:36.287 CC app/spdk_nvme_identify/identify.o 00:04:36.287 CC app/spdk_nvme_perf/perf.o 00:04:36.287 CC app/iscsi_tgt/iscsi_tgt.o 00:04:36.287 CC app/nvmf_tgt/nvmf_main.o 00:04:36.287 CC app/spdk_tgt/spdk_tgt.o 00:04:36.287 CC examples/util/zipf/zipf.o 00:04:36.287 LINK spdk_lspci 00:04:36.287 CC test/thread/poller_perf/poller_perf.o 00:04:36.287 LINK spdk_trace_record 00:04:36.287 LINK zipf 00:04:36.287 LINK nvmf_tgt 00:04:36.287 LINK iscsi_tgt 00:04:36.545 LINK poller_perf 00:04:36.545 LINK spdk_tgt 00:04:36.545 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.545 LINK spdk_trace 00:04:36.545 CC app/spdk_top/spdk_top.o 00:04:36.545 CC examples/ioat/perf/perf.o 00:04:36.545 CC examples/vmd/led/led.o 00:04:36.545 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.545 LINK spdk_nvme_discover 00:04:36.803 TEST_HEADER include/spdk/accel.h 00:04:36.803 TEST_HEADER include/spdk/accel_module.h 00:04:36.803 TEST_HEADER include/spdk/assert.h 00:04:36.803 TEST_HEADER include/spdk/barrier.h 00:04:36.803 
TEST_HEADER include/spdk/base64.h 00:04:36.803 TEST_HEADER include/spdk/bdev.h 00:04:36.803 TEST_HEADER include/spdk/bdev_module.h 00:04:36.803 TEST_HEADER include/spdk/bdev_zone.h 00:04:36.803 CC test/dma/test_dma/test_dma.o 00:04:36.803 TEST_HEADER include/spdk/bit_array.h 00:04:36.803 TEST_HEADER include/spdk/bit_pool.h 00:04:36.803 TEST_HEADER include/spdk/blob_bdev.h 00:04:36.803 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:36.803 TEST_HEADER include/spdk/blobfs.h 00:04:36.803 TEST_HEADER include/spdk/blob.h 00:04:36.803 TEST_HEADER include/spdk/conf.h 00:04:36.803 TEST_HEADER include/spdk/config.h 00:04:36.803 TEST_HEADER include/spdk/cpuset.h 00:04:36.803 TEST_HEADER include/spdk/crc16.h 00:04:36.803 TEST_HEADER include/spdk/crc32.h 00:04:36.803 TEST_HEADER include/spdk/crc64.h 00:04:36.803 TEST_HEADER include/spdk/dif.h 00:04:36.803 TEST_HEADER include/spdk/dma.h 00:04:36.803 TEST_HEADER include/spdk/endian.h 00:04:36.803 TEST_HEADER include/spdk/env_dpdk.h 00:04:36.803 TEST_HEADER include/spdk/env.h 00:04:36.803 TEST_HEADER include/spdk/event.h 00:04:36.803 TEST_HEADER include/spdk/fd_group.h 00:04:36.803 TEST_HEADER include/spdk/fd.h 00:04:36.803 TEST_HEADER include/spdk/file.h 00:04:36.803 TEST_HEADER include/spdk/fsdev.h 00:04:36.803 TEST_HEADER include/spdk/fsdev_module.h 00:04:36.803 TEST_HEADER include/spdk/ftl.h 00:04:36.803 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:36.803 TEST_HEADER include/spdk/gpt_spec.h 00:04:36.803 TEST_HEADER include/spdk/hexlify.h 00:04:36.803 TEST_HEADER include/spdk/histogram_data.h 00:04:36.803 TEST_HEADER include/spdk/idxd.h 00:04:36.803 TEST_HEADER include/spdk/idxd_spec.h 00:04:36.803 TEST_HEADER include/spdk/init.h 00:04:36.803 TEST_HEADER include/spdk/ioat.h 00:04:36.803 TEST_HEADER include/spdk/ioat_spec.h 00:04:36.803 TEST_HEADER include/spdk/iscsi_spec.h 00:04:36.803 TEST_HEADER include/spdk/json.h 00:04:36.803 TEST_HEADER include/spdk/jsonrpc.h 00:04:36.803 TEST_HEADER include/spdk/keyring.h 00:04:36.803 TEST_HEADER include/spdk/keyring_module.h 00:04:36.803 TEST_HEADER include/spdk/likely.h 00:04:36.803 TEST_HEADER include/spdk/log.h 00:04:36.803 TEST_HEADER include/spdk/lvol.h 00:04:36.803 TEST_HEADER include/spdk/md5.h 00:04:36.803 TEST_HEADER include/spdk/memory.h 00:04:36.803 TEST_HEADER include/spdk/mmio.h 00:04:36.803 TEST_HEADER include/spdk/nbd.h 00:04:36.803 LINK lsvmd 00:04:36.803 CC test/app/bdev_svc/bdev_svc.o 00:04:36.803 TEST_HEADER include/spdk/net.h 00:04:36.803 TEST_HEADER include/spdk/notify.h 00:04:36.803 LINK led 00:04:36.803 TEST_HEADER include/spdk/nvme.h 00:04:36.803 TEST_HEADER include/spdk/nvme_intel.h 00:04:36.803 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:36.803 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:36.803 TEST_HEADER include/spdk/nvme_spec.h 00:04:36.803 TEST_HEADER include/spdk/nvme_zns.h 00:04:36.803 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:36.803 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:36.803 TEST_HEADER include/spdk/nvmf.h 00:04:36.803 TEST_HEADER include/spdk/nvmf_spec.h 00:04:36.803 TEST_HEADER include/spdk/nvmf_transport.h 00:04:36.803 TEST_HEADER include/spdk/opal.h 00:04:36.803 TEST_HEADER include/spdk/opal_spec.h 00:04:36.803 TEST_HEADER include/spdk/pci_ids.h 00:04:36.803 TEST_HEADER include/spdk/pipe.h 00:04:36.803 TEST_HEADER include/spdk/queue.h 00:04:36.803 TEST_HEADER include/spdk/reduce.h 00:04:36.803 TEST_HEADER include/spdk/rpc.h 00:04:36.803 TEST_HEADER include/spdk/scheduler.h 00:04:36.803 TEST_HEADER include/spdk/scsi.h 00:04:36.803 TEST_HEADER 
include/spdk/scsi_spec.h 00:04:36.803 TEST_HEADER include/spdk/sock.h 00:04:36.803 TEST_HEADER include/spdk/stdinc.h 00:04:36.803 LINK ioat_perf 00:04:36.803 TEST_HEADER include/spdk/string.h 00:04:36.803 TEST_HEADER include/spdk/thread.h 00:04:36.803 TEST_HEADER include/spdk/trace.h 00:04:36.803 TEST_HEADER include/spdk/trace_parser.h 00:04:36.803 TEST_HEADER include/spdk/tree.h 00:04:36.803 TEST_HEADER include/spdk/ublk.h 00:04:36.803 TEST_HEADER include/spdk/util.h 00:04:36.803 TEST_HEADER include/spdk/uuid.h 00:04:36.803 TEST_HEADER include/spdk/version.h 00:04:36.803 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:36.803 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:36.803 TEST_HEADER include/spdk/vhost.h 00:04:36.803 TEST_HEADER include/spdk/vmd.h 00:04:36.803 TEST_HEADER include/spdk/xor.h 00:04:36.803 TEST_HEADER include/spdk/zipf.h 00:04:36.803 CXX test/cpp_headers/accel.o 00:04:36.803 CC app/spdk_dd/spdk_dd.o 00:04:37.061 LINK bdev_svc 00:04:37.061 LINK spdk_nvme_identify 00:04:37.061 CXX test/cpp_headers/accel_module.o 00:04:37.061 CC examples/ioat/verify/verify.o 00:04:37.061 LINK spdk_nvme_perf 00:04:37.061 CC app/fio/nvme/fio_plugin.o 00:04:37.061 LINK test_dma 00:04:37.061 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:37.061 CXX test/cpp_headers/assert.o 00:04:37.061 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:37.061 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.320 CC test/app/histogram_perf/histogram_perf.o 00:04:37.320 LINK verify 00:04:37.320 LINK spdk_dd 00:04:37.320 CXX test/cpp_headers/barrier.o 00:04:37.320 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:37.320 CC app/vhost/vhost.o 00:04:37.320 LINK histogram_perf 00:04:37.320 CXX test/cpp_headers/base64.o 00:04:37.320 LINK spdk_top 00:04:37.579 CC examples/idxd/perf/perf.o 00:04:37.579 LINK nvme_fuzz 00:04:37.579 CXX test/cpp_headers/bdev.o 00:04:37.579 LINK vhost 00:04:37.579 CC test/event/event_perf/event_perf.o 00:04:37.579 CC test/env/mem_callbacks/mem_callbacks.o 00:04:37.579 LINK spdk_nvme 00:04:37.579 CXX test/cpp_headers/bdev_module.o 00:04:37.579 CC test/nvme/aer/aer.o 00:04:37.579 CC test/rpc_client/rpc_client_test.o 00:04:37.579 LINK vhost_fuzz 00:04:37.837 LINK event_perf 00:04:37.837 CC test/accel/dif/dif.o 00:04:37.837 CC app/fio/bdev/fio_plugin.o 00:04:37.837 LINK idxd_perf 00:04:37.837 CXX test/cpp_headers/bdev_zone.o 00:04:37.837 LINK rpc_client_test 00:04:37.837 CC test/event/reactor/reactor.o 00:04:37.837 LINK aer 00:04:38.096 CXX test/cpp_headers/bit_array.o 00:04:38.096 LINK reactor 00:04:38.096 CC test/blobfs/mkfs/mkfs.o 00:04:38.096 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:38.096 CC test/event/reactor_perf/reactor_perf.o 00:04:38.096 LINK mem_callbacks 00:04:38.096 CXX test/cpp_headers/bit_pool.o 00:04:38.096 CC test/nvme/reset/reset.o 00:04:38.096 LINK reactor_perf 00:04:38.096 LINK mkfs 00:04:38.096 LINK interrupt_tgt 00:04:38.354 CXX test/cpp_headers/blob_bdev.o 00:04:38.354 CC test/env/vtophys/vtophys.o 00:04:38.354 CC examples/thread/thread/thread_ex.o 00:04:38.354 LINK spdk_bdev 00:04:38.354 CC test/event/app_repeat/app_repeat.o 00:04:38.354 LINK vtophys 00:04:38.354 LINK reset 00:04:38.354 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:38.354 CXX test/cpp_headers/blobfs_bdev.o 00:04:38.354 CC test/event/scheduler/scheduler.o 00:04:38.354 LINK app_repeat 00:04:38.613 LINK dif 00:04:38.613 LINK thread 00:04:38.613 LINK env_dpdk_post_init 00:04:38.613 CXX test/cpp_headers/blobfs.o 00:04:38.613 CC test/lvol/esnap/esnap.o 00:04:38.613 CC test/nvme/sgl/sgl.o 
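The TEST_HEADER list just printed, together with the CXX test/cpp_headers/*.o compiles that follow, is a header self-containment check: every public header is built as its own translation unit so that a header silently relying on another include fails here rather than in user code. A hedged sketch of the idea, with illustrative paths and flags:

  # compile each public header standalone; a non-self-contained header breaks
  set -euo pipefail
  mkdir -p hdr_check
  for h in include/spdk/*.h; do
    name=$(basename "$h" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "hdr_check/$name.cpp"
    g++ -I include -c "hdr_check/$name.cpp" -o "hdr_check/$name.o"
  done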
00:04:38.613 CC examples/sock/hello_world/hello_sock.o 00:04:38.613 LINK scheduler 00:04:38.613 CXX test/cpp_headers/blob.o 00:04:38.613 CC test/app/jsoncat/jsoncat.o 00:04:38.613 CC test/app/stub/stub.o 00:04:38.613 CC test/env/memory/memory_ut.o 00:04:38.871 CXX test/cpp_headers/conf.o 00:04:38.871 LINK jsoncat 00:04:38.871 CC test/bdev/bdevio/bdevio.o 00:04:38.871 LINK hello_sock 00:04:38.871 LINK stub 00:04:38.871 LINK sgl 00:04:38.871 LINK iscsi_fuzz 00:04:38.871 CXX test/cpp_headers/config.o 00:04:38.871 CXX test/cpp_headers/cpuset.o 00:04:38.871 CC examples/accel/perf/accel_perf.o 00:04:39.130 CC test/nvme/e2edp/nvme_dp.o 00:04:39.130 CXX test/cpp_headers/crc16.o 00:04:39.130 CC examples/blob/hello_world/hello_blob.o 00:04:39.130 CC examples/nvme/hello_world/hello_world.o 00:04:39.130 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:39.130 LINK bdevio 00:04:39.130 CC examples/blob/cli/blobcli.o 00:04:39.130 CXX test/cpp_headers/crc32.o 00:04:39.390 LINK nvme_dp 00:04:39.390 LINK hello_blob 00:04:39.390 LINK hello_world 00:04:39.390 CXX test/cpp_headers/crc64.o 00:04:39.390 LINK hello_fsdev 00:04:39.390 CC test/env/pci/pci_ut.o 00:04:39.390 LINK accel_perf 00:04:39.390 CXX test/cpp_headers/dif.o 00:04:39.390 CC test/nvme/overhead/overhead.o 00:04:39.390 CC test/nvme/err_injection/err_injection.o 00:04:39.649 CC examples/nvme/reconnect/reconnect.o 00:04:39.649 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:39.649 CXX test/cpp_headers/dma.o 00:04:39.649 CC test/nvme/startup/startup.o 00:04:39.649 LINK blobcli 00:04:39.649 LINK err_injection 00:04:39.649 CXX test/cpp_headers/endian.o 00:04:39.649 LINK overhead 00:04:39.908 LINK startup 00:04:39.908 LINK pci_ut 00:04:39.908 LINK memory_ut 00:04:39.908 LINK reconnect 00:04:39.908 CXX test/cpp_headers/env_dpdk.o 00:04:39.908 CC test/nvme/reserve/reserve.o 00:04:39.908 CC test/nvme/simple_copy/simple_copy.o 00:04:39.908 LINK nvme_manage 00:04:39.908 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.908 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.908 CC test/nvme/connect_stress/connect_stress.o 00:04:39.908 CXX test/cpp_headers/env.o 00:04:40.167 CC test/nvme/boot_partition/boot_partition.o 00:04:40.167 CC test/nvme/compliance/nvme_compliance.o 00:04:40.167 LINK reserve 00:04:40.167 LINK simple_copy 00:04:40.167 CXX test/cpp_headers/event.o 00:04:40.167 CC examples/nvme/arbitration/arbitration.o 00:04:40.167 LINK boot_partition 00:04:40.167 LINK connect_stress 00:04:40.167 LINK hello_bdev 00:04:40.167 CC test/nvme/fused_ordering/fused_ordering.o 00:04:40.426 CXX test/cpp_headers/fd_group.o 00:04:40.426 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:40.426 CC test/nvme/fdp/fdp.o 00:04:40.426 CC test/nvme/cuse/cuse.o 00:04:40.426 LINK nvme_compliance 00:04:40.426 CC examples/nvme/hotplug/hotplug.o 00:04:40.426 CXX test/cpp_headers/fd.o 00:04:40.426 LINK arbitration 00:04:40.426 LINK fused_ordering 00:04:40.426 LINK doorbell_aers 00:04:40.426 CXX test/cpp_headers/file.o 00:04:40.684 CXX test/cpp_headers/fsdev.o 00:04:40.684 CXX test/cpp_headers/fsdev_module.o 00:04:40.684 CXX test/cpp_headers/ftl.o 00:04:40.684 LINK hotplug 00:04:40.684 CXX test/cpp_headers/fuse_dispatcher.o 00:04:40.684 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:40.684 CXX test/cpp_headers/gpt_spec.o 00:04:40.684 LINK fdp 00:04:40.684 CXX test/cpp_headers/hexlify.o 00:04:40.684 CC examples/nvme/abort/abort.o 00:04:40.684 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:40.684 CXX test/cpp_headers/histogram_data.o 00:04:40.943 LINK bdevperf 00:04:40.943 CXX 
test/cpp_headers/idxd.o 00:04:40.943 CXX test/cpp_headers/idxd_spec.o 00:04:40.943 LINK cmb_copy 00:04:40.943 CXX test/cpp_headers/init.o 00:04:40.943 CXX test/cpp_headers/ioat.o 00:04:40.943 CXX test/cpp_headers/ioat_spec.o 00:04:40.943 LINK pmr_persistence 00:04:40.943 CXX test/cpp_headers/iscsi_spec.o 00:04:40.943 CXX test/cpp_headers/json.o 00:04:40.943 CXX test/cpp_headers/jsonrpc.o 00:04:40.943 CXX test/cpp_headers/keyring.o 00:04:40.943 LINK abort 00:04:40.943 CXX test/cpp_headers/keyring_module.o 00:04:41.201 CXX test/cpp_headers/likely.o 00:04:41.201 CXX test/cpp_headers/log.o 00:04:41.201 CXX test/cpp_headers/lvol.o 00:04:41.201 CXX test/cpp_headers/md5.o 00:04:41.201 CXX test/cpp_headers/memory.o 00:04:41.201 CXX test/cpp_headers/mmio.o 00:04:41.201 CXX test/cpp_headers/nbd.o 00:04:41.201 CXX test/cpp_headers/net.o 00:04:41.201 CXX test/cpp_headers/notify.o 00:04:41.201 CXX test/cpp_headers/nvme.o 00:04:41.201 CXX test/cpp_headers/nvme_intel.o 00:04:41.201 CXX test/cpp_headers/nvme_ocssd.o 00:04:41.201 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:41.201 CXX test/cpp_headers/nvme_spec.o 00:04:41.201 CC examples/nvmf/nvmf/nvmf.o 00:04:41.459 CXX test/cpp_headers/nvme_zns.o 00:04:41.459 CXX test/cpp_headers/nvmf_cmd.o 00:04:41.459 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:41.459 CXX test/cpp_headers/nvmf.o 00:04:41.459 CXX test/cpp_headers/nvmf_spec.o 00:04:41.459 CXX test/cpp_headers/nvmf_transport.o 00:04:41.459 CXX test/cpp_headers/opal.o 00:04:41.459 CXX test/cpp_headers/opal_spec.o 00:04:41.459 CXX test/cpp_headers/pci_ids.o 00:04:41.459 CXX test/cpp_headers/pipe.o 00:04:41.459 CXX test/cpp_headers/queue.o 00:04:41.459 CXX test/cpp_headers/reduce.o 00:04:41.459 LINK nvmf 00:04:41.459 CXX test/cpp_headers/rpc.o 00:04:41.459 CXX test/cpp_headers/scheduler.o 00:04:41.459 CXX test/cpp_headers/scsi.o 00:04:41.717 CXX test/cpp_headers/scsi_spec.o 00:04:41.717 LINK cuse 00:04:41.718 CXX test/cpp_headers/sock.o 00:04:41.718 CXX test/cpp_headers/stdinc.o 00:04:41.718 CXX test/cpp_headers/string.o 00:04:41.718 CXX test/cpp_headers/thread.o 00:04:41.718 CXX test/cpp_headers/trace.o 00:04:41.718 CXX test/cpp_headers/trace_parser.o 00:04:41.718 CXX test/cpp_headers/tree.o 00:04:41.718 CXX test/cpp_headers/ublk.o 00:04:41.718 CXX test/cpp_headers/util.o 00:04:41.718 CXX test/cpp_headers/uuid.o 00:04:41.718 CXX test/cpp_headers/version.o 00:04:41.718 CXX test/cpp_headers/vfio_user_pci.o 00:04:41.718 CXX test/cpp_headers/vfio_user_spec.o 00:04:41.718 CXX test/cpp_headers/vhost.o 00:04:41.718 CXX test/cpp_headers/vmd.o 00:04:41.718 CXX test/cpp_headers/xor.o 00:04:41.718 CXX test/cpp_headers/zipf.o 00:04:43.664 LINK esnap 00:04:43.664 00:04:43.664 real 1m0.250s 00:04:43.664 user 5m4.992s 00:04:43.664 sys 0m48.535s 00:04:43.664 13:53:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:43.664 13:53:21 make -- common/autotest_common.sh@10 -- $ set +x 00:04:43.664 ************************************ 00:04:43.664 END TEST make 00:04:43.664 ************************************ 00:04:43.923 13:53:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:43.923 13:53:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:43.923 13:53:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:43.923 13:53:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.923 13:53:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:43.923 13:53:21 -- pm/common@44 -- $ pid=5795 00:04:43.923 13:53:21 -- pm/common@50 
-- $ kill -TERM 5795 00:04:43.923 13:53:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.923 13:53:21 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:43.923 13:53:21 -- pm/common@44 -- $ pid=5796 00:04:43.923 13:53:21 -- pm/common@50 -- $ kill -TERM 5796 00:04:43.923 13:53:22 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.923 13:53:22 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.923 13:53:22 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.923 13:53:22 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.923 13:53:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.923 13:53:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.923 13:53:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.923 13:53:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.923 13:53:22 -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.923 13:53:22 -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.923 13:53:22 -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.923 13:53:22 -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.923 13:53:22 -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.923 13:53:22 -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.923 13:53:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.923 13:53:22 -- scripts/common.sh@344 -- # case "$op" in 00:04:43.923 13:53:22 -- scripts/common.sh@345 -- # : 1 00:04:43.923 13:53:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.923 13:53:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.923 13:53:22 -- scripts/common.sh@365 -- # decimal 1 00:04:43.923 13:53:22 -- scripts/common.sh@353 -- # local d=1 00:04:43.923 13:53:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.923 13:53:22 -- scripts/common.sh@355 -- # echo 1 00:04:43.923 13:53:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.923 13:53:22 -- scripts/common.sh@366 -- # decimal 2 00:04:43.924 13:53:22 -- scripts/common.sh@353 -- # local d=2 00:04:43.924 13:53:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.924 13:53:22 -- scripts/common.sh@355 -- # echo 2 00:04:43.924 13:53:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.924 13:53:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.924 13:53:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.924 13:53:22 -- scripts/common.sh@368 -- # return 0 00:04:43.924 13:53:22 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.924 13:53:22 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.924 --rc genhtml_branch_coverage=1 00:04:43.924 --rc genhtml_function_coverage=1 00:04:43.924 --rc genhtml_legend=1 00:04:43.924 --rc geninfo_all_blocks=1 00:04:43.924 --rc geninfo_unexecuted_blocks=1 00:04:43.924 00:04:43.924 ' 00:04:43.924 13:53:22 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.924 --rc genhtml_branch_coverage=1 00:04:43.924 --rc genhtml_function_coverage=1 00:04:43.924 --rc genhtml_legend=1 00:04:43.924 --rc geninfo_all_blocks=1 00:04:43.924 --rc geninfo_unexecuted_blocks=1 00:04:43.924 00:04:43.924 ' 00:04:43.924 13:53:22 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:43.924 --rc genhtml_branch_coverage=1 00:04:43.924 --rc genhtml_function_coverage=1 00:04:43.924 --rc genhtml_legend=1 00:04:43.924 --rc geninfo_all_blocks=1 00:04:43.924 --rc geninfo_unexecuted_blocks=1 00:04:43.924 00:04:43.924 ' 00:04:43.924 13:53:22 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.924 --rc genhtml_branch_coverage=1 00:04:43.924 --rc genhtml_function_coverage=1 00:04:43.924 --rc genhtml_legend=1 00:04:43.924 --rc geninfo_all_blocks=1 00:04:43.924 --rc geninfo_unexecuted_blocks=1 00:04:43.924 00:04:43.924 ' 00:04:43.924 13:53:22 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.924 13:53:22 -- nvmf/common.sh@7 -- # uname -s 00:04:43.924 13:53:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.924 13:53:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.924 13:53:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.924 13:53:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.924 13:53:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.924 13:53:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.924 13:53:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.924 13:53:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.924 13:53:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.924 13:53:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.924 13:53:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:04:43.924 13:53:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:04:43.924 13:53:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.924 13:53:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.924 13:53:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.924 13:53:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.924 13:53:22 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.924 13:53:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.924 13:53:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.924 13:53:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.924 13:53:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.924 13:53:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.924 13:53:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.924 13:53:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.924 13:53:22 -- paths/export.sh@5 -- # export PATH 00:04:43.924 13:53:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.924 13:53:22 -- nvmf/common.sh@51 -- # : 0 00:04:43.924 13:53:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.924 13:53:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.924 13:53:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.924 13:53:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.924 13:53:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.924 13:53:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.924 13:53:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.924 13:53:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.924 13:53:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.924 13:53:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.924 13:53:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.924 13:53:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.924 13:53:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.924 13:53:22 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.924 13:53:22 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.924 13:53:22 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.924 13:53:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.924 13:53:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.924 13:53:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.924 13:53:22 -- spdk/autotest.sh@48 -- # udevadm_pid=66937 00:04:43.924 13:53:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:43.924 13:53:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.924 13:53:22 -- pm/common@17 -- # local monitor 00:04:43.924 13:53:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.924 13:53:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.924 13:53:22 -- pm/common@25 -- # sleep 1 00:04:43.924 13:53:22 -- pm/common@21 -- # date +%s 00:04:43.924 13:53:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731851602 00:04:43.924 13:53:22 -- pm/common@21 -- # date +%s 00:04:43.924 13:53:22 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731851602 00:04:43.924 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731851602_collect-vmstat.pm.log 00:04:43.924 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731851602_collect-cpu-load.pm.log 00:04:45.302 13:53:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:45.302 13:53:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:45.302 13:53:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.302 13:53:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 13:53:23 -- spdk/autotest.sh@59 -- # create_test_list 
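By this point autotest.sh has swapped in its core-collector core_pattern, started udevadm monitoring, and launched the pm resource monitors; the two "Redirecting to ... .pm.log" lines are those monitors opening epoch-stamped logs. A simplified sketch of that monitor pattern (the real collect-vmstat script is more elaborate, and the output directory here is a placeholder):

  out=/tmp/power; mkdir -p "$out"
  stamp=$(date +%s)                      # e.g. 1731851602, as in the log names
  log="$out/monitor.autotest.sh.${stamp}_collect-vmstat.pm.log"
  while true; do vmstat 1 1 | tail -n1 >> "$log"; sleep 1; done &
  echo $! > "$out/collect-vmstat.pid"    # pid file the later 'kill -TERM' consumes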
00:04:45.302 13:53:23 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:45.302 13:53:23 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 13:53:23 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:45.302 13:53:23 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:45.302 13:53:23 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:45.302 13:53:23 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:45.302 13:53:23 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:45.302 13:53:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:45.302 13:53:23 -- common/autotest_common.sh@1455 -- # uname 00:04:45.302 13:53:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:45.302 13:53:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:45.302 13:53:23 -- common/autotest_common.sh@1475 -- # uname 00:04:45.302 13:53:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:45.302 13:53:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:45.302 13:53:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:45.302 lcov: LCOV version 1.15 00:04:45.302 13:53:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:00.177 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:00.177 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:15.087 13:53:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:15.087 13:53:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:15.087 13:53:52 -- common/autotest_common.sh@10 -- # set +x 00:05:15.087 13:53:52 -- spdk/autotest.sh@78 -- # rm -f 00:05:15.087 13:53:52 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.349 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:15.349 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:15.349 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:15.349 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:15.349 13:53:53 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:15.349 13:53:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:15.349 13:53:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:15.349 13:53:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:15.349 13:53:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:15.349 13:53:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:15.349 13:53:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:15.349 13:53:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:15.349 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.349 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.349 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:15.349 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:15.349 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:15.649 No valid GPT data, bailing 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.649 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.649 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:15.649 1+0 records in 00:05:15.649 1+0 records out 00:05:15.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107311 s, 97.7 MB/s 
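The trace above is autotest's disk-reset pass for /dev/nvme0n1: spdk-gpt.py reports "No valid GPT data, bailing", blkid confirms there is no partition table, block_in_use returns 1, and the first MiB of the namespace is zeroed so stale metadata cannot leak into later tests. A condensed sketch of that logic (destructive; the glob is simplified from the script's nvme*n!(*p*) extglob):

  for dev in /dev/nvme*n1; do
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty when no table found
    [[ -z "$pt" ]] && dd if=/dev/zero of="$dev" bs=1M count=1
  done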
00:05:15.649 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.649 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.649 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:15.649 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:15.649 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:15.649 No valid GPT data, bailing 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.649 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.649 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:15.649 1+0 records in 00:05:15.649 1+0 records out 00:05:15.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00295573 s, 355 MB/s 00:05:15.649 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.649 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.649 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:15.649 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:15.649 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:15.649 No valid GPT data, bailing 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.649 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.649 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:15.649 1+0 records in 00:05:15.649 1+0 records out 00:05:15.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00398239 s, 263 MB/s 00:05:15.649 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.649 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.649 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:15.649 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:15.649 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:15.649 No valid GPT data, bailing 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.649 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.649 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:15.649 1+0 records in 00:05:15.649 1+0 records out 00:05:15.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415249 s, 253 MB/s 00:05:15.649 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.649 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.649 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:15.649 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:15.649 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:15.649 No valid GPT data, bailing 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:15.649 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.649 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.649 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:15.649 1+0 records in 00:05:15.649 1+0 records out 00:05:15.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412209 s, 254 
MB/s 00:05:15.649 13:53:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:15.649 13:53:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:15.649 13:53:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:15.649 13:53:53 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:15.649 13:53:53 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:15.929 No valid GPT data, bailing 00:05:15.929 13:53:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:15.929 13:53:53 -- scripts/common.sh@394 -- # pt= 00:05:15.929 13:53:53 -- scripts/common.sh@395 -- # return 1 00:05:15.929 13:53:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:15.929 1+0 records in 00:05:15.929 1+0 records out 00:05:15.929 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378255 s, 277 MB/s 00:05:15.929 13:53:53 -- spdk/autotest.sh@105 -- # sync 00:05:15.929 13:53:54 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:15.929 13:53:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:15.929 13:53:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:17.317 13:53:55 -- spdk/autotest.sh@111 -- # uname -s 00:05:17.317 13:53:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:17.317 13:53:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:17.317 13:53:55 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.461 Hugepages 00:05:18.461 node hugesize free / total 00:05:18.461 node0 1048576kB 0 / 0 00:05:18.461 node0 2048kB 0 / 0 00:05:18.461 00:05:18.461 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:18.461 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:18.461 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:18.461 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:18.723 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:18.723 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:18.723 13:53:56 -- spdk/autotest.sh@117 -- # uname -s 00:05:18.723 13:53:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:18.723 13:53:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:18.723 13:53:56 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.296 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.870 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.870 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.870 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.870 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:19.870 13:53:58 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:20.815 13:53:59 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:20.815 13:53:59 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:20.815 13:53:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:20.815 13:53:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:20.815 13:53:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:20.815 13:53:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:20.815 13:53:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
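All six namespaces were zeroed the same way, and the sync afterwards flushes those writes before the PCI rebind. The get_nvme_bdfs helper entered at the end of the trace collects controller PCI addresses by piping gen_nvme.sh JSON through jq; a hedged alternative that reads the same addresses straight from sysfs looks like this:

  bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdfs+=("$(basename "$(readlink -f "$ctrl/device")")")   # e.g. 0000:00:10.0
  done
  printf '%s\n' "${bdfs[@]}"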
00:05:20.815 13:53:59 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:20.815 13:53:59 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:20.815 13:53:59 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:20.815 13:53:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:20.815 13:53:59 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.384 Waiting for block devices as requested 00:05:21.384 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.384 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.642 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:21.642 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:26.905 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:26.905 13:54:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.905 13:54:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:26.905 13:54:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.905 13:54:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:26.905 13:54:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.905 13:54:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:26.905 13:54:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:26.906 13:54:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.906 13:54:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.906 13:54:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
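get_nvme_ctrlr_from_bdf, traced above for 0000:00:10.0 (which resolves to nvme1), maps a PCI address to its character device by resolving the /sys/class/nvme symlinks. A reduced sketch of the same lookup, with the bdf value taken from the log:

  bdf=0000:00:10.0
  for ctrl in /sys/class/nvme/nvme*; do
    readlink -f "$ctrl" | grep -q "$bdf/nvme/nvme" && echo "/dev/$(basename "$ctrl")"
  done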
00:05:26.906 13:54:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.906 13:54:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.906 13:54:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.906 13:54:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.906 13:54:04 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:26.906 13:54:04 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:26.906 13:54:04 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:26.906 13:54:04 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:26.906 13:54:04 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:26.906 13:54:04 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:26.906 13:54:04 -- common/autotest_common.sh@1541 -- # continue 00:05:26.906 13:54:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:26.906 13:54:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.906 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:05:26.906 13:54:05 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:26.906 13:54:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:26.906 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:05:26.906 13:54:05 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.731 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.731 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.731 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.731 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:27.731 13:54:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:27.731 13:54:05 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.731 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.731 13:54:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:27.731 13:54:06 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:27.731 13:54:06 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:27.731 13:54:06 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:27.731 13:54:06 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:27.731 13:54:06 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:27.731 13:54:06 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:27.731 13:54:06 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:27.731 13:54:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:27.731 
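[editor's note] The loop traced above (autotest_common.sh lines 1485-1541) resolves each PCI address to its kernel controller node and skips devices that either lack namespace management or report no unallocated capacity. A condensed reconstruction of that logic, assuming nvme-cli is installed and the kernel nvme driver owns the devices; this is a sketch of the traced flow, not the script itself:

```bash
#!/usr/bin/env bash
# Sketch of the pre_cleanup per-BDF loop seen in the trace above.
set -euo pipefail

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # Map the PCI address to its controller node (nvme0, nvme1, ...).
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || continue
    ctrlr=/dev/$(basename "$path")

    # OACS bit 3 = namespace management; 0x12a & 0x8 = 8 in the trace above.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) == 0 )); then
        continue    # controller cannot manage namespaces
    fi

    # unvmcap == 0 -> no unallocated capacity left behind, nothing to repair.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    if (( unvmcap == 0 )); then
        continue
    fi
    echo "$ctrlr needs namespace cleanup"
done
```

On this VM every controller reports oacs=0x12a and unvmcap=0, so all four iterations hit the final `continue`, which is exactly what the trace shows.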
13:54:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:27.731 13:54:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:27.731 13:54:06 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:27.731 13:54:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:27.990 13:54:06 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:27.990 13:54:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:27.990 13:54:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.990 13:54:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.990 13:54:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.990 13:54:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.990 13:54:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.990 13:54:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.990 13:54:06 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:27.990 13:54:06 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:27.990 13:54:06 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:27.990 13:54:06 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:27.990 13:54:06 -- common/autotest_common.sh@1570 -- # return 0 00:05:27.990 13:54:06 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:27.990 13:54:06 -- common/autotest_common.sh@1578 -- # return 0 00:05:27.990 13:54:06 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:27.990 13:54:06 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:27.990 13:54:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.990 13:54:06 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:27.990 13:54:06 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:27.990 13:54:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.990 13:54:06 -- common/autotest_common.sh@10 -- # set +x 00:05:27.990 13:54:06 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:27.990 13:54:06 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.990 13:54:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.990 13:54:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.990 13:54:06 -- common/autotest_common.sh@10 -- # set +x 00:05:27.990 ************************************ 00:05:27.990 START TEST env 00:05:27.990 ************************************ 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:27.990 * Looking for test storage... 
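[editor's note] opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54 (reading this as an Intel datacenter NVMe part is my interpretation; the log does not say). The QEMU-emulated controllers here report 0x0010, so the filter matches nothing and the function returns immediately. The filter is easy to reproduce by hand:

```bash
# Sketch of the get_nvme_bdfs_by_id filter traced above.
want=0x0a54
bdfs=()
for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 for QEMU's emulated NVMe
    [[ $dev == "$want" ]] && bdfs+=("$bdf")
done
echo "matched: ${#bdfs[@]}"   # 0 on this VM, hence the early return 0 above
```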
00:05:27.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.990 13:54:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.990 13:54:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.990 13:54:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.990 13:54:06 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.990 13:54:06 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.990 13:54:06 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.990 13:54:06 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.990 13:54:06 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.990 13:54:06 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.990 13:54:06 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.990 13:54:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.990 13:54:06 env -- scripts/common.sh@344 -- # case "$op" in 00:05:27.990 13:54:06 env -- scripts/common.sh@345 -- # : 1 00:05:27.990 13:54:06 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.990 13:54:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.990 13:54:06 env -- scripts/common.sh@365 -- # decimal 1 00:05:27.990 13:54:06 env -- scripts/common.sh@353 -- # local d=1 00:05:27.990 13:54:06 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.990 13:54:06 env -- scripts/common.sh@355 -- # echo 1 00:05:27.990 13:54:06 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.990 13:54:06 env -- scripts/common.sh@366 -- # decimal 2 00:05:27.990 13:54:06 env -- scripts/common.sh@353 -- # local d=2 00:05:27.990 13:54:06 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.990 13:54:06 env -- scripts/common.sh@355 -- # echo 2 00:05:27.990 13:54:06 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.990 13:54:06 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.990 13:54:06 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.990 13:54:06 env -- scripts/common.sh@368 -- # return 0 00:05:27.990 13:54:06 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.991 --rc genhtml_branch_coverage=1 00:05:27.991 --rc genhtml_function_coverage=1 00:05:27.991 --rc genhtml_legend=1 00:05:27.991 --rc geninfo_all_blocks=1 00:05:27.991 --rc geninfo_unexecuted_blocks=1 00:05:27.991 00:05:27.991 ' 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.991 --rc genhtml_branch_coverage=1 00:05:27.991 --rc genhtml_function_coverage=1 00:05:27.991 --rc genhtml_legend=1 00:05:27.991 --rc geninfo_all_blocks=1 00:05:27.991 --rc geninfo_unexecuted_blocks=1 00:05:27.991 00:05:27.991 ' 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.991 --rc genhtml_branch_coverage=1 00:05:27.991 --rc genhtml_function_coverage=1 00:05:27.991 --rc 
genhtml_legend=1 00:05:27.991 --rc geninfo_all_blocks=1 00:05:27.991 --rc geninfo_unexecuted_blocks=1 00:05:27.991 00:05:27.991 ' 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.991 --rc genhtml_branch_coverage=1 00:05:27.991 --rc genhtml_function_coverage=1 00:05:27.991 --rc genhtml_legend=1 00:05:27.991 --rc geninfo_all_blocks=1 00:05:27.991 --rc geninfo_unexecuted_blocks=1 00:05:27.991 00:05:27.991 ' 00:05:27.991 13:54:06 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.991 13:54:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.991 13:54:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.991 ************************************ 00:05:27.991 START TEST env_memory 00:05:27.991 ************************************ 00:05:27.991 13:54:06 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:27.991 00:05:27.991 00:05:27.991 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.991 http://cunit.sourceforge.net/ 00:05:27.991 00:05:27.991 00:05:27.991 Suite: memory 00:05:28.249 Test: alloc and free memory map ...[2024-11-17 13:54:06.316562] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.249 passed 00:05:28.249 Test: mem map translation ...[2024-11-17 13:54:06.355702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:28.249 [2024-11-17 13:54:06.355814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:28.249 [2024-11-17 13:54:06.355923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:28.249 [2024-11-17 13:54:06.355941] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:28.249 passed 00:05:28.249 Test: mem map registration ...[2024-11-17 13:54:06.423885] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:28.249 [2024-11-17 13:54:06.423921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:28.249 passed 00:05:28.249 Test: mem map adjacent registrations ...passed 00:05:28.249 00:05:28.249 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.249 suites 1 1 n/a 0 0 00:05:28.249 tests 4 4 4 0 0 00:05:28.249 asserts 152 152 152 0 n/a 00:05:28.249 00:05:28.249 Elapsed time = 0.232 seconds 00:05:28.249 ************************************ 00:05:28.249 END TEST env_memory 00:05:28.249 ************************************ 00:05:28.249 00:05:28.249 real 0m0.262s 00:05:28.249 user 0m0.240s 00:05:28.249 sys 0m0.015s 00:05:28.249 13:54:06 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.249 13:54:06 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:28.508 13:54:06 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.508 13:54:06 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.508 13:54:06 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.508 13:54:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.508 ************************************ 00:05:28.508 START TEST env_vtophys 00:05:28.508 ************************************ 00:05:28.508 13:54:06 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:28.508 EAL: lib.eal log level changed from notice to debug 00:05:28.508 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 1 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 2 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 3 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 4 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 5 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 6 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 7 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 8 as core 0 on socket 0 00:05:28.508 EAL: Detected lcore 9 as core 0 on socket 0 00:05:28.508 EAL: Maximum logical cores by configuration: 128 00:05:28.508 EAL: Detected CPU lcores: 10 00:05:28.508 EAL: Detected NUMA nodes: 1 00:05:28.508 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:28.508 EAL: Detected shared linkage of DPDK 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:28.508 EAL: Registered [vdev] bus. 00:05:28.508 EAL: bus.vdev log level changed from disabled to notice 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:28.508 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:28.508 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:28.508 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:28.508 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.508 EAL: No shared files mode enabled, IPC is disabled 00:05:28.508 EAL: Selected IOVA mode 'PA' 00:05:28.508 EAL: Probing VFIO support... 00:05:28.508 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.508 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:28.508 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.508 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.508 EAL: Setting up physically contiguous memory... 
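[editor's note] The "VFIO modules not loaded, skipping VFIO support" lines above come from EAL probing sysfs for the vfio module before choosing a device-access method. The same condition can be checked directly (the IOVA-mode consequence noted in the comment is the usual behavior, not a guarantee):

```bash
# Why EAL reported "Module /sys/module/vfio not found" in the trace above.
if [[ -d /sys/module/vfio ]]; then
    echo "vfio loaded; EAL can use VFIO and typically selects IOVA mode 'VA'"
else
    echo "vfio absent; EAL falls back to uio-style access and IOVA mode 'PA'"
fi
```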
00:05:28.508 EAL: Setting maximum number of open files to 524288 00:05:28.508 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.508 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.508 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.508 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.508 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.508 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.508 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.508 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.508 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.508 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.508 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.508 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.508 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.508 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.508 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.508 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.508 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.508 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.508 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.508 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.508 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.508 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.508 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.508 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.508 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.508 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.508 EAL: Hugepages will be freed exactly as allocated. 00:05:28.508 EAL: No shared files mode enabled, IPC is disabled 00:05:28.508 EAL: No shared files mode enabled, IPC is disabled 00:05:28.508 EAL: TSC frequency is ~2600000 KHz 00:05:28.508 EAL: Main lcore 0 is ready (tid=7f6650c4aa40;cpuset=[0]) 00:05:28.508 EAL: Trying to obtain current memory policy. 00:05:28.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.508 EAL: Restoring previous memory policy: 0 00:05:28.508 EAL: request: mp_malloc_sync 00:05:28.508 EAL: No shared files mode enabled, IPC is disabled 00:05:28.508 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.508 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:28.508 EAL: No shared files mode enabled, IPC is disabled 00:05:28.508 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.508 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.508 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:28.508 00:05:28.508 00:05:28.508 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.508 http://cunit.sourceforge.net/ 00:05:28.508 00:05:28.508 00:05:28.508 Suite: components_suite 00:05:28.766 Test: vtophys_malloc_test ...passed 00:05:28.766 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
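[editor's note] The repeating ask/reserve pairs above are straightforward to verify: each memseg list covers n_segs × hugepage_sz bytes, which is where the 0x400000000 figures come from; the small 0x61000 asks appear to hold the list metadata (my reading of the layout, not stated in the log):

```bash
# 8192 segments x 2 MiB hugepages = 0x400000000 bytes per memseg list.
printf '0x%x bytes (%d GiB) per memseg list\n' \
    $((8192 * 2097152)) $((8192 * 2097152 / 1024**3))
# -> 0x400000000 bytes (16 GiB) per memseg list
```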
00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.766 EAL: Trying to obtain current memory policy. 00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.766 EAL: Trying to obtain current memory policy. 00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.766 EAL: Trying to obtain current memory policy. 00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.766 EAL: Trying to obtain current memory policy. 00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.766 EAL: Trying to obtain current memory policy. 
00:05:28.766 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.766 EAL: Restoring previous memory policy: 4 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.766 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.766 EAL: request: mp_malloc_sync 00:05:28.766 EAL: No shared files mode enabled, IPC is disabled 00:05:28.766 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.766 EAL: Trying to obtain current memory policy. 00:05:28.767 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.024 EAL: Restoring previous memory policy: 4 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.024 EAL: request: mp_malloc_sync 00:05:29.024 EAL: No shared files mode enabled, IPC is disabled 00:05:29.024 EAL: Heap on socket 0 was expanded by 130MB 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.024 EAL: request: mp_malloc_sync 00:05:29.024 EAL: No shared files mode enabled, IPC is disabled 00:05:29.024 EAL: Heap on socket 0 was shrunk by 130MB 00:05:29.024 EAL: Trying to obtain current memory policy. 00:05:29.024 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.024 EAL: Restoring previous memory policy: 4 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.024 EAL: request: mp_malloc_sync 00:05:29.024 EAL: No shared files mode enabled, IPC is disabled 00:05:29.024 EAL: Heap on socket 0 was expanded by 258MB 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.024 EAL: request: mp_malloc_sync 00:05:29.024 EAL: No shared files mode enabled, IPC is disabled 00:05:29.024 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.024 EAL: Trying to obtain current memory policy. 00:05:29.024 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.024 EAL: Restoring previous memory policy: 4 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.024 EAL: request: mp_malloc_sync 00:05:29.024 EAL: No shared files mode enabled, IPC is disabled 00:05:29.024 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.024 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.283 EAL: request: mp_malloc_sync 00:05:29.283 EAL: No shared files mode enabled, IPC is disabled 00:05:29.283 EAL: Heap on socket 0 was shrunk by 514MB 00:05:29.283 EAL: Trying to obtain current memory policy. 
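[editor's note] The expand/shrink sizes in vtophys_spdk_malloc_test follow a clean pattern: the test doubles its allocation each round, and every growth figure is 2^k + 2 MB — plausibly the 2^k MB buffer plus allocator metadata rounded up to one extra 2 MB hugepage (that overhead explanation is an inference, not something the log states):

```bash
# Reproduce the heap-growth sequence reported by the mem event callbacks above.
for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB
```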
00:05:29.283 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.283 EAL: Restoring previous memory policy: 4 00:05:29.283 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.283 EAL: request: mp_malloc_sync 00:05:29.283 EAL: No shared files mode enabled, IPC is disabled 00:05:29.283 EAL: Heap on socket 0 was expanded by 1026MB 00:05:29.542 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.542 EAL: request: mp_malloc_sync 00:05:29.542 passed 00:05:29.542 00:05:29.542 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.542 suites 1 1 n/a 0 0 00:05:29.542 tests 2 2 2 0 0 00:05:29.542 asserts 5358 5358 5358 0 n/a 00:05:29.542 00:05:29.542 Elapsed time = 0.935 seconds 00:05:29.542 EAL: No shared files mode enabled, IPC is disabled 00:05:29.542 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:29.542 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.542 EAL: request: mp_malloc_sync 00:05:29.542 EAL: No shared files mode enabled, IPC is disabled 00:05:29.542 EAL: Heap on socket 0 was shrunk by 2MB 00:05:29.542 EAL: No shared files mode enabled, IPC is disabled 00:05:29.542 EAL: No shared files mode enabled, IPC is disabled 00:05:29.542 EAL: No shared files mode enabled, IPC is disabled 00:05:29.542 ************************************ 00:05:29.542 END TEST env_vtophys 00:05:29.542 ************************************ 00:05:29.542 00:05:29.542 real 0m1.154s 00:05:29.542 user 0m0.458s 00:05:29.542 sys 0m0.564s 00:05:29.542 13:54:07 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.542 13:54:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:29.542 13:54:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.542 13:54:07 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:29.542 13:54:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.542 13:54:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.542 ************************************ 00:05:29.542 START TEST env_pci 00:05:29.542 ************************************ 00:05:29.542 13:54:07 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:29.542 00:05:29.542 00:05:29.542 CUnit - A unit testing framework for C - Version 2.1-3 00:05:29.542 http://cunit.sourceforge.net/ 00:05:29.542 00:05:29.542 00:05:29.542 Suite: pci 00:05:29.542 Test: pci_hook ...[2024-11-17 13:54:07.777714] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69670 has claimed it 00:05:29.542 passed 00:05:29.542 00:05:29.542 Run Summary: Type Total Ran Passed Failed Inactive 00:05:29.542 suites 1 1 n/a 0 0 00:05:29.542 tests 1 1 1 0 0 00:05:29.542 asserts 25 25 25 0 n/a 00:05:29.542 00:05:29.542 Elapsed time = 0.003 seconds 00:05:29.542 EAL: Cannot find device (10000:00:01.0) 00:05:29.542 EAL: Failed to attach device on primary process 00:05:29.542 00:05:29.542 real 0m0.050s 00:05:29.542 user 0m0.024s 00:05:29.542 sys 0m0.026s 00:05:29.542 ************************************ 00:05:29.542 END TEST env_pci 00:05:29.542 ************************************ 00:05:29.542 13:54:07 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.542 13:54:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:29.542 13:54:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:29.542 13:54:07 env -- env/env.sh@15 -- # uname 00:05:29.801 13:54:07 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:29.801 13:54:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:29.801 13:54:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.801 13:54:07 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:29.801 13:54:07 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:29.801 13:54:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:29.801 ************************************ 00:05:29.801 START TEST env_dpdk_post_init 00:05:29.801 ************************************ 00:05:29.801 13:54:07 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:29.801 EAL: Detected CPU lcores: 10 00:05:29.801 EAL: Detected NUMA nodes: 1 00:05:29.801 EAL: Detected shared linkage of DPDK 00:05:29.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:29.801 EAL: Selected IOVA mode 'PA' 00:05:29.801 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:29.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:29.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:29.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:29.801 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:29.801 Starting DPDK initialization... 00:05:29.801 Starting SPDK post initialization... 00:05:29.801 SPDK NVMe probe 00:05:29.801 Attaching to 0000:00:10.0 00:05:29.801 Attaching to 0000:00:11.0 00:05:29.801 Attaching to 0000:00:12.0 00:05:29.801 Attaching to 0000:00:13.0 00:05:29.801 Attached to 0000:00:10.0 00:05:29.801 Attached to 0000:00:11.0 00:05:29.801 Attached to 0000:00:13.0 00:05:29.801 Attached to 0000:00:12.0 00:05:29.801 Cleaning up... 
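[editor's note] These spdk_nvme probes only succeed because setup.sh had already moved the four functions from the kernel nvme driver to uio_pci_generic (the "nvme -> uio_pci_generic" lines earlier in the log). A manual equivalent of that rebind for one device, using the standard sysfs mechanism rather than setup.sh itself (requires root):

```bash
# Hand the device from the kernel nvme driver to uio_pci_generic.
bdf=0000:00:10.0
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"            # detach kernel nvme
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override" # pin the target driver
echo "$bdf" > /sys/bus/pci/drivers_probe                           # rebind
```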
00:05:29.801 ************************************ 00:05:29.801 END TEST env_dpdk_post_init 00:05:29.801 ************************************ 00:05:29.801 00:05:29.801 real 0m0.219s 00:05:29.801 user 0m0.056s 00:05:29.801 sys 0m0.064s 00:05:29.801 13:54:08 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.801 13:54:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.061 13:54:08 env -- env/env.sh@26 -- # uname 00:05:30.061 13:54:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:30.061 13:54:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.061 13:54:08 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.061 13:54:08 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.061 13:54:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.061 ************************************ 00:05:30.061 START TEST env_mem_callbacks 00:05:30.061 ************************************ 00:05:30.061 13:54:08 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:30.061 EAL: Detected CPU lcores: 10 00:05:30.061 EAL: Detected NUMA nodes: 1 00:05:30.061 EAL: Detected shared linkage of DPDK 00:05:30.061 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.061 EAL: Selected IOVA mode 'PA' 00:05:30.061 00:05:30.061 00:05:30.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.061 http://cunit.sourceforge.net/ 00:05:30.061 00:05:30.061 00:05:30.061 Suite: memory 00:05:30.061 Test: test ... 00:05:30.061 register 0x200000200000 2097152 00:05:30.061 malloc 3145728 00:05:30.061 register 0x200000400000 4194304 00:05:30.061 buf 0x200000500000 len 3145728 PASSED 00:05:30.061 malloc 64 00:05:30.061 buf 0x2000004fff40 len 64 PASSED 00:05:30.061 malloc 4194304 00:05:30.061 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.061 register 0x200000800000 6291456 00:05:30.061 buf 0x200000a00000 len 4194304 PASSED 00:05:30.061 free 0x200000500000 3145728 00:05:30.061 free 0x2000004fff40 64 00:05:30.061 unregister 0x200000400000 4194304 PASSED 00:05:30.061 free 0x200000a00000 4194304 00:05:30.061 unregister 0x200000800000 6291456 PASSED 00:05:30.061 malloc 8388608 00:05:30.061 register 0x200000400000 10485760 00:05:30.061 buf 0x200000600000 len 8388608 PASSED 00:05:30.061 free 0x200000600000 8388608 00:05:30.061 unregister 0x200000400000 10485760 PASSED 00:05:30.061 passed 00:05:30.061 00:05:30.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.061 suites 1 1 n/a 0 0 00:05:30.061 tests 1 1 1 0 0 00:05:30.061 asserts 15 15 15 0 n/a 00:05:30.061 00:05:30.061 Elapsed time = 0.008 seconds 00:05:30.061 00:05:30.061 real 0m0.162s 00:05:30.061 user 0m0.021s 00:05:30.061 sys 0m0.038s 00:05:30.061 13:54:08 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.061 ************************************ 00:05:30.061 END TEST env_mem_callbacks 00:05:30.061 ************************************ 00:05:30.061 13:54:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:30.061 ************************************ 00:05:30.061 END TEST env 00:05:30.061 ************************************ 00:05:30.061 00:05:30.061 real 0m2.194s 00:05:30.061 user 0m0.959s 00:05:30.061 sys 0m0.899s 00:05:30.061 13:54:08 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.061 13:54:08 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:30.061 13:54:08 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.061 13:54:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.061 13:54:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.061 13:54:08 -- common/autotest_common.sh@10 -- # set +x 00:05:30.061 ************************************ 00:05:30.061 START TEST rpc 00:05:30.061 ************************************ 00:05:30.061 13:54:08 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:30.320 * Looking for test storage... 00:05:30.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.320 13:54:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.320 13:54:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.320 13:54:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.320 13:54:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.320 13:54:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.320 13:54:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.320 13:54:08 rpc -- scripts/common.sh@345 -- # : 1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.320 13:54:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.320 13:54:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.320 13:54:08 rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.320 13:54:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.320 13:54:08 rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.320 13:54:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.320 13:54:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.320 13:54:08 rpc -- scripts/common.sh@368 -- # return 0 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.320 --rc genhtml_branch_coverage=1 00:05:30.320 --rc genhtml_function_coverage=1 00:05:30.320 --rc genhtml_legend=1 00:05:30.320 --rc geninfo_all_blocks=1 00:05:30.320 --rc geninfo_unexecuted_blocks=1 00:05:30.320 00:05:30.320 ' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.320 --rc genhtml_branch_coverage=1 00:05:30.320 --rc genhtml_function_coverage=1 00:05:30.320 --rc genhtml_legend=1 00:05:30.320 --rc geninfo_all_blocks=1 00:05:30.320 --rc geninfo_unexecuted_blocks=1 00:05:30.320 00:05:30.320 ' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.320 --rc genhtml_branch_coverage=1 00:05:30.320 --rc genhtml_function_coverage=1 00:05:30.320 --rc genhtml_legend=1 00:05:30.320 --rc geninfo_all_blocks=1 00:05:30.320 --rc geninfo_unexecuted_blocks=1 00:05:30.320 00:05:30.320 ' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:30.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.320 --rc genhtml_branch_coverage=1 00:05:30.320 --rc genhtml_function_coverage=1 00:05:30.320 --rc genhtml_legend=1 00:05:30.320 --rc geninfo_all_blocks=1 00:05:30.320 --rc geninfo_unexecuted_blocks=1 00:05:30.320 00:05:30.320 ' 00:05:30.320 13:54:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69792 00:05:30.320 13:54:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.320 13:54:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69792 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@831 -- # '[' -z 69792 ']' 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.320 13:54:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
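[editor's note] rpc.sh launches spdk_tgt in the background and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern (rpc.py and the spdk_get_version method ship in the SPDK tree; the 10-second polling budget here is an arbitrary choice, not what waitforlisten uses):

```bash
# Start the target and poll its RPC socket until it responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
pid=$!
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
           spdk_get_version &>/dev/null; then
        break   # socket is up; tests can issue RPCs against $pid now
    fi
    sleep 0.1
done
```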
00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.320 13:54:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.320 [2024-11-17 13:54:08.560042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:30.320 [2024-11-17 13:54:08.560336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69792 ] 00:05:30.579 [2024-11-17 13:54:08.704724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.579 [2024-11-17 13:54:08.735977] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:30.579 [2024-11-17 13:54:08.736173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69792' to capture a snapshot of events at runtime. 00:05:30.579 [2024-11-17 13:54:08.736254] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:30.579 [2024-11-17 13:54:08.736288] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:30.579 [2024-11-17 13:54:08.736313] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69792 for offline analysis/debug. 00:05:30.579 [2024-11-17 13:54:08.736370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.512 13:54:09 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.512 13:54:09 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.512 13:54:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.512 13:54:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:31.512 13:54:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:31.512 13:54:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:31.512 13:54:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.512 13:54:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.512 13:54:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.512 ************************************ 00:05:31.512 START TEST rpc_integrity 00:05:31.512 ************************************ 00:05:31.512 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:31.512 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.512 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 
13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.513 { 00:05:31.513 "name": "Malloc0", 00:05:31.513 "aliases": [ 00:05:31.513 "c28b403f-d2d8-4ace-9efd-5a78bf444fd5" 00:05:31.513 ], 00:05:31.513 "product_name": "Malloc disk", 00:05:31.513 "block_size": 512, 00:05:31.513 "num_blocks": 16384, 00:05:31.513 "uuid": "c28b403f-d2d8-4ace-9efd-5a78bf444fd5", 00:05:31.513 "assigned_rate_limits": { 00:05:31.513 "rw_ios_per_sec": 0, 00:05:31.513 "rw_mbytes_per_sec": 0, 00:05:31.513 "r_mbytes_per_sec": 0, 00:05:31.513 "w_mbytes_per_sec": 0 00:05:31.513 }, 00:05:31.513 "claimed": false, 00:05:31.513 "zoned": false, 00:05:31.513 "supported_io_types": { 00:05:31.513 "read": true, 00:05:31.513 "write": true, 00:05:31.513 "unmap": true, 00:05:31.513 "flush": true, 00:05:31.513 "reset": true, 00:05:31.513 "nvme_admin": false, 00:05:31.513 "nvme_io": false, 00:05:31.513 "nvme_io_md": false, 00:05:31.513 "write_zeroes": true, 00:05:31.513 "zcopy": true, 00:05:31.513 "get_zone_info": false, 00:05:31.513 "zone_management": false, 00:05:31.513 "zone_append": false, 00:05:31.513 "compare": false, 00:05:31.513 "compare_and_write": false, 00:05:31.513 "abort": true, 00:05:31.513 "seek_hole": false, 00:05:31.513 "seek_data": false, 00:05:31.513 "copy": true, 00:05:31.513 "nvme_iov_md": false 00:05:31.513 }, 00:05:31.513 "memory_domains": [ 00:05:31.513 { 00:05:31.513 "dma_device_id": "system", 00:05:31.513 "dma_device_type": 1 00:05:31.513 }, 00:05:31.513 { 00:05:31.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.513 "dma_device_type": 2 00:05:31.513 } 00:05:31.513 ], 00:05:31.513 "driver_specific": {} 00:05:31.513 } 00:05:31.513 ]' 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 [2024-11-17 13:54:09.598318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:31.513 [2024-11-17 13:54:09.598470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.513 [2024-11-17 13:54:09.598504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:31.513 [2024-11-17 13:54:09.598514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.513 [2024-11-17 13:54:09.600708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.513 [2024-11-17 13:54:09.600742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.513 Passthru0 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
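[editor's note] Every RPC the rpc_integrity test drives appears verbatim in this trace: create a malloc bdev, stack a passthru bdev on it, verify both via bdev_get_bdevs, then tear down in reverse order. The same sequence driven by hand (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
malloc=$($rpc bdev_malloc_create 8 512)              # 8 MB bdev, 512 B blocks -> prints "Malloc0"
$rpc bdev_passthru_create -b "$malloc" -p Passthru0  # stack a passthru bdev on top
$rpc bdev_get_bdevs | jq length                      # expect 2: Malloc0 + Passthru0
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete "$malloc"
$rpc bdev_get_bdevs | jq length                      # back to 0, as the '[' 0 == 0 ']' check verifies
```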
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.513 { 00:05:31.513 "name": "Malloc0", 00:05:31.513 "aliases": [ 00:05:31.513 "c28b403f-d2d8-4ace-9efd-5a78bf444fd5" 00:05:31.513 ], 00:05:31.513 "product_name": "Malloc disk", 00:05:31.513 "block_size": 512, 00:05:31.513 "num_blocks": 16384, 00:05:31.513 "uuid": "c28b403f-d2d8-4ace-9efd-5a78bf444fd5", 00:05:31.513 "assigned_rate_limits": { 00:05:31.513 "rw_ios_per_sec": 0, 00:05:31.513 "rw_mbytes_per_sec": 0, 00:05:31.513 "r_mbytes_per_sec": 0, 00:05:31.513 "w_mbytes_per_sec": 0 00:05:31.513 }, 00:05:31.513 "claimed": true, 00:05:31.513 "claim_type": "exclusive_write", 00:05:31.513 "zoned": false, 00:05:31.513 "supported_io_types": { 00:05:31.513 "read": true, 00:05:31.513 "write": true, 00:05:31.513 "unmap": true, 00:05:31.513 "flush": true, 00:05:31.513 "reset": true, 00:05:31.513 "nvme_admin": false, 00:05:31.513 "nvme_io": false, 00:05:31.513 "nvme_io_md": false, 00:05:31.513 "write_zeroes": true, 00:05:31.513 "zcopy": true, 00:05:31.513 "get_zone_info": false, 00:05:31.513 "zone_management": false, 00:05:31.513 "zone_append": false, 00:05:31.513 "compare": false, 00:05:31.513 "compare_and_write": false, 00:05:31.513 "abort": true, 00:05:31.513 "seek_hole": false, 00:05:31.513 "seek_data": false, 00:05:31.513 "copy": true, 00:05:31.513 "nvme_iov_md": false 00:05:31.513 }, 00:05:31.513 "memory_domains": [ 00:05:31.513 { 00:05:31.513 "dma_device_id": "system", 00:05:31.513 "dma_device_type": 1 00:05:31.513 }, 00:05:31.513 { 00:05:31.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.513 "dma_device_type": 2 00:05:31.513 } 00:05:31.513 ], 00:05:31.513 "driver_specific": {} 00:05:31.513 }, 00:05:31.513 { 00:05:31.513 "name": "Passthru0", 00:05:31.513 "aliases": [ 00:05:31.513 "83c5f74e-2bc1-5e34-a3ef-58b894de9c1a" 00:05:31.513 ], 00:05:31.513 "product_name": "passthru", 00:05:31.513 "block_size": 512, 00:05:31.513 "num_blocks": 16384, 00:05:31.513 "uuid": "83c5f74e-2bc1-5e34-a3ef-58b894de9c1a", 00:05:31.513 "assigned_rate_limits": { 00:05:31.513 "rw_ios_per_sec": 0, 00:05:31.513 "rw_mbytes_per_sec": 0, 00:05:31.513 "r_mbytes_per_sec": 0, 00:05:31.513 "w_mbytes_per_sec": 0 00:05:31.513 }, 00:05:31.513 "claimed": false, 00:05:31.513 "zoned": false, 00:05:31.513 "supported_io_types": { 00:05:31.513 "read": true, 00:05:31.513 "write": true, 00:05:31.513 "unmap": true, 00:05:31.513 "flush": true, 00:05:31.513 "reset": true, 00:05:31.513 "nvme_admin": false, 00:05:31.513 "nvme_io": false, 00:05:31.513 "nvme_io_md": false, 00:05:31.513 "write_zeroes": true, 00:05:31.513 "zcopy": true, 00:05:31.513 "get_zone_info": false, 00:05:31.513 "zone_management": false, 00:05:31.513 "zone_append": false, 00:05:31.513 "compare": false, 00:05:31.513 "compare_and_write": false, 00:05:31.513 "abort": true, 00:05:31.513 "seek_hole": false, 00:05:31.513 "seek_data": false, 00:05:31.513 "copy": true, 00:05:31.513 "nvme_iov_md": false 00:05:31.513 }, 00:05:31.513 "memory_domains": [ 00:05:31.513 { 00:05:31.513 "dma_device_id": "system", 00:05:31.513 "dma_device_type": 1 00:05:31.513 }, 00:05:31.513 { 00:05:31.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.513 "dma_device_type": 
2
00:05:31.513 }
00:05:31.513 ],
00:05:31.513 "driver_specific": {
00:05:31.513 "passthru": {
00:05:31.513 "name": "Passthru0",
00:05:31.513 "base_bdev_name": "Malloc0"
00:05:31.513 }
00:05:31.513 }
00:05:31.513 }
00:05:31.513 ]'
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length ************************************
00:05:31.513 END TEST rpc_integrity
00:05:31.513 ************************************
00:05:31.513 13:54:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:31.513
00:05:31.513 real 0m0.217s
00:05:31.513 user 0m0.123s
00:05:31.513 sys 0m0.034s
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.513 13:54:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:31.513 13:54:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:31.513 13:54:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.513 13:54:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.513 13:54:09 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.514 ************************************
00:05:31.514 START TEST rpc_plugins
00:05:31.514 ************************************
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:31.514 {
00:05:31.514 "name": "Malloc1",
00:05:31.514 "aliases": [
00:05:31.514 "31628b69-812b-43b5-af03-527b4c23705c"
00:05:31.514 ],
00:05:31.514 "product_name": "Malloc disk",
00:05:31.514 "block_size": 4096,
00:05:31.514 "num_blocks": 256,
00:05:31.514 "uuid": "31628b69-812b-43b5-af03-527b4c23705c",
00:05:31.514 "assigned_rate_limits": {
00:05:31.514 "rw_ios_per_sec": 0,
00:05:31.514 "rw_mbytes_per_sec": 0,
00:05:31.514 "r_mbytes_per_sec": 0,
00:05:31.514 "w_mbytes_per_sec": 0
00:05:31.514 },
00:05:31.514 "claimed": false,
00:05:31.514 "zoned": false,
00:05:31.514 "supported_io_types": {
00:05:31.514 "read": true,
00:05:31.514 "write": true,
00:05:31.514 "unmap": true,
00:05:31.514 "flush": true,
00:05:31.514 "reset": true,
00:05:31.514 "nvme_admin": false,
00:05:31.514 "nvme_io": false,
00:05:31.514 "nvme_io_md": false,
00:05:31.514 "write_zeroes": true,
00:05:31.514 "zcopy": true,
00:05:31.514 "get_zone_info": false,
00:05:31.514 "zone_management": false,
00:05:31.514 "zone_append": false,
00:05:31.514 "compare": false,
00:05:31.514 "compare_and_write": false,
00:05:31.514 "abort": true,
00:05:31.514 "seek_hole": false,
00:05:31.514 "seek_data": false,
00:05:31.514 "copy": true,
00:05:31.514 "nvme_iov_md": false
00:05:31.514 },
00:05:31.514 "memory_domains": [
00:05:31.514 {
00:05:31.514 "dma_device_id": "system",
00:05:31.514 "dma_device_type": 1
00:05:31.514 },
00:05:31.514 {
00:05:31.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:31.514 "dma_device_type": 2
00:05:31.514 }
00:05:31.514 ],
00:05:31.514 "driver_specific": {}
00:05:31.514 }
00:05:31.514 ]'
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:31.514 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.514 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.772 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.772 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:31.772 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length ************************************
00:05:31.772 END TEST rpc_plugins
00:05:31.772 ************************************
00:05:31.772 13:54:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:31.772
00:05:31.772 real 0m0.108s
00:05:31.772 user 0m0.063s
00:05:31.772 sys 0m0.015s
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.772 13:54:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:31.772 13:54:09 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:05:31.772 13:54:09 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.772 13:54:09 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.772 13:54:09 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:31.772 ************************************
00:05:31.772 START TEST rpc_trace_cmd_test
00:05:31.772 ************************************
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:05:31.772 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69792",
00:05:31.772 "tpoint_group_mask": "0x8",
00:05:31.772 "iscsi_conn": {
00:05:31.772 "mask": "0x2",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "scsi": {
00:05:31.772 "mask": "0x4",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "bdev": {
00:05:31.772 "mask": "0x8",
00:05:31.772 "tpoint_mask": "0xffffffffffffffff"
00:05:31.772 },
00:05:31.772 "nvmf_rdma": {
00:05:31.772 "mask": "0x10",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "nvmf_tcp": {
00:05:31.772 "mask": "0x20",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "ftl": {
00:05:31.772 "mask": "0x40",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "blobfs": {
00:05:31.772 "mask": "0x80",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "dsa": {
00:05:31.772 "mask": "0x200",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "thread": {
00:05:31.772 "mask": "0x400",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "nvme_pcie": {
00:05:31.772 "mask": "0x800",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "iaa": {
00:05:31.772 "mask": "0x1000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "nvme_tcp": {
00:05:31.772 "mask": "0x2000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "bdev_nvme": {
00:05:31.772 "mask": "0x4000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "sock": {
00:05:31.772 "mask": "0x8000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "blob": {
00:05:31.772 "mask": "0x10000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 },
00:05:31.772 "bdev_raid": {
00:05:31.772 "mask": "0x20000",
00:05:31.772 "tpoint_mask": "0x0"
00:05:31.772 }
00:05:31.772 }'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:05:31.772 13:54:09 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:05:31.772 13:54:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:05:31.772 13:54:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask ************************************
00:05:31.772 END TEST rpc_trace_cmd_test
00:05:31.772 ************************************
00:05:31.772 13:54:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:05:31.772
00:05:31.772 real 0m0.170s
00:05:31.772 user 0m0.135s
00:05:31.772 sys 0m0.027s
00:05:31.772 13:54:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.772 13:54:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 13:54:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:32.029 13:54:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:32.029 13:54:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:32.029 13:54:10 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.029 13:54:10 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.029 13:54:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 ************************************
00:05:32.029 START TEST rpc_daemon_integrity
00:05:32.029 ************************************
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:32.029 {
00:05:32.029 "name": "Malloc2",
00:05:32.029 "aliases": [
00:05:32.029 "9c8f68d7-2d35-44a3-92bd-88792f098ca7"
00:05:32.029 ],
00:05:32.029 "product_name": "Malloc disk",
00:05:32.029 "block_size": 512,
00:05:32.029 "num_blocks": 16384,
00:05:32.029 "uuid": "9c8f68d7-2d35-44a3-92bd-88792f098ca7",
00:05:32.029 "assigned_rate_limits": {
00:05:32.029 "rw_ios_per_sec": 0,
00:05:32.029 "rw_mbytes_per_sec": 0,
00:05:32.029 "r_mbytes_per_sec": 0,
00:05:32.029 "w_mbytes_per_sec": 0
00:05:32.029 },
00:05:32.029 "claimed": false,
00:05:32.029 "zoned": false,
00:05:32.029 "supported_io_types": {
00:05:32.029 "read": true,
00:05:32.029 "write": true,
00:05:32.029 "unmap": true,
00:05:32.029 "flush": true,
00:05:32.029 "reset": true,
00:05:32.029 "nvme_admin": false,
00:05:32.029 "nvme_io": false,
00:05:32.029 "nvme_io_md": false,
00:05:32.029 "write_zeroes": true,
00:05:32.029 "zcopy": true,
00:05:32.029 "get_zone_info": false,
00:05:32.029 "zone_management": false,
00:05:32.029 "zone_append": false,
00:05:32.029 "compare": false,
00:05:32.029 "compare_and_write": false,
00:05:32.029 "abort": true,
00:05:32.029 "seek_hole": false,
00:05:32.029 "seek_data": false,
00:05:32.029 "copy": true,
00:05:32.029 "nvme_iov_md": false
00:05:32.029 },
00:05:32.029 "memory_domains": [
00:05:32.029 {
00:05:32.029 "dma_device_id": "system",
00:05:32.029 "dma_device_type": 1
00:05:32.029 },
00:05:32.029 {
00:05:32.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.029 "dma_device_type": 2
00:05:32.029 }
00:05:32.029 ],
00:05:32.029 "driver_specific": {}
00:05:32.029 }
00:05:32.029 ]'
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 [2024-11-17 13:54:10.202632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:05:32.029 [2024-11-17 13:54:10.202684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:32.029 [2024-11-17 13:54:10.202706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:05:32.029 [2024-11-17 13:54:10.202716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:32.029 [2024-11-17 13:54:10.204840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:32.029 [2024-11-17 13:54:10.204873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:32.029 Passthru0
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.029 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:32.029 {
00:05:32.029 "name": "Malloc2",
00:05:32.029 "aliases": [
00:05:32.029 "9c8f68d7-2d35-44a3-92bd-88792f098ca7"
00:05:32.029 ],
00:05:32.029 "product_name": "Malloc disk",
00:05:32.029 "block_size": 512,
00:05:32.029 "num_blocks": 16384,
00:05:32.029 "uuid": "9c8f68d7-2d35-44a3-92bd-88792f098ca7",
00:05:32.029 "assigned_rate_limits": {
00:05:32.029 "rw_ios_per_sec": 0,
00:05:32.029 "rw_mbytes_per_sec": 0,
00:05:32.029 "r_mbytes_per_sec": 0,
00:05:32.029 "w_mbytes_per_sec": 0
00:05:32.029 },
00:05:32.029 "claimed": true,
00:05:32.029 "claim_type": "exclusive_write",
00:05:32.029 "zoned": false,
00:05:32.029 "supported_io_types": {
00:05:32.029 "read": true,
00:05:32.029 "write": true,
00:05:32.029 "unmap": true,
00:05:32.029 "flush": true,
00:05:32.029 "reset": true,
00:05:32.029 "nvme_admin": false,
00:05:32.029 "nvme_io": false,
00:05:32.029 "nvme_io_md": false,
00:05:32.029 "write_zeroes": true,
00:05:32.029 "zcopy": true,
00:05:32.029 "get_zone_info": false,
00:05:32.029 "zone_management": false,
00:05:32.029 "zone_append": false,
00:05:32.029 "compare": false,
00:05:32.029 "compare_and_write": false,
00:05:32.029 "abort": true,
00:05:32.029 "seek_hole": false,
00:05:32.029 "seek_data": false,
00:05:32.029 "copy": true,
00:05:32.029 "nvme_iov_md": false
00:05:32.029 },
00:05:32.029 "memory_domains": [
00:05:32.029 {
00:05:32.029 "dma_device_id": "system",
00:05:32.029 "dma_device_type": 1
00:05:32.029 },
00:05:32.029 {
00:05:32.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.029 "dma_device_type": 2
00:05:32.029 }
00:05:32.029 ],
00:05:32.029 "driver_specific": {}
00:05:32.029 },
00:05:32.029 {
00:05:32.029 "name": "Passthru0",
00:05:32.029 "aliases": [
00:05:32.029 "13b15858-d6eb-5447-9398-df2949fe3cd1"
00:05:32.029 ],
00:05:32.030 "product_name": "passthru",
00:05:32.030 "block_size": 512,
00:05:32.030 "num_blocks": 16384,
00:05:32.030 "uuid": "13b15858-d6eb-5447-9398-df2949fe3cd1",
00:05:32.030 "assigned_rate_limits": {
00:05:32.030 "rw_ios_per_sec": 0,
00:05:32.030 "rw_mbytes_per_sec": 0,
00:05:32.030 "r_mbytes_per_sec": 0,
00:05:32.030 "w_mbytes_per_sec": 0
00:05:32.030 },
00:05:32.030 "claimed": false,
00:05:32.030 "zoned": false,
00:05:32.030 "supported_io_types": {
00:05:32.030 "read": true,
00:05:32.030 "write": true,
00:05:32.030 "unmap": true,
00:05:32.030 "flush": true,
00:05:32.030 "reset": true,
00:05:32.030 "nvme_admin": false,
00:05:32.030 "nvme_io": false,
00:05:32.030 "nvme_io_md": false,
00:05:32.030 "write_zeroes": true,
00:05:32.030 "zcopy": true,
00:05:32.030 "get_zone_info": false,
00:05:32.030 "zone_management": false,
00:05:32.030 "zone_append": false,
00:05:32.030 "compare": false,
00:05:32.030 "compare_and_write": false,
00:05:32.030 "abort": true,
00:05:32.030 "seek_hole": false,
00:05:32.030 "seek_data": false,
00:05:32.030 "copy": true,
00:05:32.030 "nvme_iov_md": false
00:05:32.030 },
00:05:32.030 "memory_domains": [
00:05:32.030 {
00:05:32.030 "dma_device_id": "system",
00:05:32.030 "dma_device_type": 1
00:05:32.030 },
00:05:32.030 {
00:05:32.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:32.030 "dma_device_type": 2
00:05:32.030 }
00:05:32.030 ],
00:05:32.030 "driver_specific": {
00:05:32.030 "passthru": {
00:05:32.030 "name": "Passthru0",
00:05:32.030 "base_bdev_name": "Malloc2"
00:05:32.030 }
00:05:32.030 }
00:05:32.030 }
00:05:32.030 ]'
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length ************************************
00:05:32.030 END TEST rpc_daemon_integrity
00:05:32.030 ************************************
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:32.030
00:05:32.030 real 0m0.230s
00:05:32.030 user 0m0.128s
00:05:32.030 sys 0m0.036s
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.030 13:54:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:32.289 13:54:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:05:32.289 13:54:10 rpc -- rpc/rpc.sh@84 -- # killprocess 69792
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@950 -- # '[' -z 69792 ']'
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@954 -- # kill -0 69792
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@955 -- # uname
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69792
00:05:32.289 killing process with pid 69792
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69792'
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@969 -- # kill 69792
00:05:32.289 13:54:10 rpc -- common/autotest_common.sh@974 -- # wait 69792
00:05:32.547
00:05:32.547 real 0m2.262s
00:05:32.547 user 0m2.698s
00:05:32.547 sys 0m0.558s
00:05:32.547 13:54:10 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.547 13:54:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.547 ************************************
00:05:32.547 END TEST rpc
00:05:32.547 ************************************
00:05:32.547 13:54:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:32.547 13:54:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.547 13:54:10 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.547 13:54:10 -- common/autotest_common.sh@10 -- # set +x
00:05:32.547 ************************************
00:05:32.547 START TEST skip_rpc
00:05:32.547 ************************************
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:05:32.547 * Looking for test storage...
00:05:32.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@345 -- # : 1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:32.547 13:54:10 skip_rpc -- scripts/common.sh@368 -- # return 0
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.547 --rc genhtml_branch_coverage=1
00:05:32.547 --rc genhtml_function_coverage=1
00:05:32.547 --rc genhtml_legend=1
00:05:32.547 --rc geninfo_all_blocks=1
00:05:32.547 --rc geninfo_unexecuted_blocks=1
00:05:32.547
00:05:32.547 '
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.547 --rc genhtml_branch_coverage=1
00:05:32.547 --rc genhtml_function_coverage=1
00:05:32.547 --rc genhtml_legend=1
00:05:32.547 --rc geninfo_all_blocks=1
00:05:32.547 --rc geninfo_unexecuted_blocks=1
00:05:32.547
00:05:32.547 '
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.547 --rc genhtml_branch_coverage=1
00:05:32.547 --rc genhtml_function_coverage=1
00:05:32.547 --rc genhtml_legend=1
00:05:32.547 --rc geninfo_all_blocks=1
00:05:32.547 --rc geninfo_unexecuted_blocks=1
00:05:32.547
00:05:32.547 '
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:32.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:32.547 --rc genhtml_branch_coverage=1
00:05:32.547 --rc genhtml_function_coverage=1
00:05:32.547 --rc genhtml_legend=1
00:05:32.547 --rc geninfo_all_blocks=1
00:05:32.547 --rc geninfo_unexecuted_blocks=1
00:05:32.547
00:05:32.547 '
00:05:32.547 13:54:10 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:32.547 13:54:10 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:32.547 13:54:10 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.547 13:54:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:32.547 ************************************
00:05:32.547 START TEST skip_rpc
00:05:32.547 ************************************
00:05:32.547 13:54:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc
00:05:32.547 13:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69993
00:05:32.547 13:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:32.547 13:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:32.547 13:54:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:32.806 [2024-11-17 13:54:10.858700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:05:32.806 [2024-11-17 13:54:10.858800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69993 ]
00:05:32.806 [2024-11-17 13:54:11.004902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.806 [2024-11-17 13:54:11.033404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69993
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69993 ']'
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69993
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69993
killing process with pid 69993
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69993'
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69993
00:05:38.085 13:54:15 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69993
00:05:38.085 ************************************
00:05:38.085 END TEST skip_rpc
00:05:38.085 ************************************
00:05:38.085
00:05:38.085 real 0m5.270s
00:05:38.085 user 0m4.951s
00:05:38.085 sys 0m0.223s
00:05:38.085 13:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:38.085 13:54:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.085 13:54:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:38.085 13:54:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:38.085 13:54:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:38.085 13:54:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.085 ************************************
00:05:38.085 START TEST skip_rpc_with_json
00:05:38.085 ************************************
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70075
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70075
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70075 ']'
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:38.085 13:54:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:38.085 [2024-11-17 13:54:16.194280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:05:38.085 [2024-11-17 13:54:16.194400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70075 ]
00:05:38.085 [2024-11-17 13:54:16.343199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.346 [2024-11-17 13:54:16.393061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:38.917 [2024-11-17 13:54:17.043772] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:38.917 request:
00:05:38.917 {
00:05:38.917 "trtype": "tcp",
00:05:38.917 "method": "nvmf_get_transports",
00:05:38.917 "req_id": 1
00:05:38.917 }
00:05:38.917 Got JSON-RPC error response
00:05:38.917 response:
00:05:38.917 {
00:05:38.917 "code": -19,
00:05:38.917 "message": "No such device"
00:05:38.917 }
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:38.917 [2024-11-17 13:54:17.056031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:38.917 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:39.178 {
00:05:39.178 "subsystems": [
00:05:39.178 {
00:05:39.178 "subsystem": "fsdev",
00:05:39.178 "config": [
00:05:39.178 {
00:05:39.178 "method": "fsdev_set_opts",
00:05:39.178 "params": {
00:05:39.178 "fsdev_io_pool_size": 65535,
00:05:39.178 "fsdev_io_cache_size": 256
00:05:39.178 }
00:05:39.178 }
00:05:39.178 ]
00:05:39.178 },
00:05:39.178 {
00:05:39.178 "subsystem": "keyring",
00:05:39.178 "config": []
00:05:39.178 },
00:05:39.178 {
00:05:39.178 "subsystem": "iobuf",
00:05:39.178 "config": [
00:05:39.178 {
00:05:39.178 "method": "iobuf_set_options",
00:05:39.178 "params": {
00:05:39.178 "small_pool_count": 8192,
00:05:39.178 "large_pool_count": 1024,
00:05:39.178 "small_bufsize": 8192,
00:05:39.178 "large_bufsize": 135168
00:05:39.178 }
00:05:39.178 }
00:05:39.178 ]
00:05:39.178 },
00:05:39.178 {
00:05:39.178 "subsystem": "sock",
00:05:39.178 "config": [
00:05:39.178 {
00:05:39.178 "method": "sock_set_default_impl",
00:05:39.178 "params": {
00:05:39.178 "impl_name": "posix"
00:05:39.178 }
00:05:39.178 },
00:05:39.178 {
00:05:39.178 "method": "sock_impl_set_options",
00:05:39.178 "params": {
00:05:39.178 "impl_name": "ssl",
00:05:39.178 "recv_buf_size": 4096,
00:05:39.178 "send_buf_size": 4096,
00:05:39.178 "enable_recv_pipe": true,
00:05:39.178 "enable_quickack": false,
00:05:39.178 "enable_placement_id": 0,
00:05:39.178 "enable_zerocopy_send_server": true,
00:05:39.178 "enable_zerocopy_send_client": false,
00:05:39.178 "zerocopy_threshold": 0,
00:05:39.179 "tls_version": 0,
00:05:39.179 "enable_ktls": false
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "sock_impl_set_options",
00:05:39.179 "params": {
00:05:39.179 "impl_name": "posix",
00:05:39.179 "recv_buf_size": 2097152,
00:05:39.179 "send_buf_size": 2097152,
00:05:39.179 "enable_recv_pipe": true,
00:05:39.179 "enable_quickack": false,
00:05:39.179 "enable_placement_id": 0,
00:05:39.179 "enable_zerocopy_send_server": true,
00:05:39.179 "enable_zerocopy_send_client": false,
00:05:39.179 "zerocopy_threshold": 0,
00:05:39.179 "tls_version": 0,
00:05:39.179 "enable_ktls": false
00:05:39.179 }
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "vmd",
00:05:39.179 "config": []
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "accel",
00:05:39.179 "config": [
00:05:39.179 {
00:05:39.179 "method": "accel_set_options",
00:05:39.179 "params": {
00:05:39.179 "small_cache_size": 128,
00:05:39.179 "large_cache_size": 16,
00:05:39.179 "task_count": 2048,
00:05:39.179 "sequence_count": 2048,
00:05:39.179 "buf_count": 2048
00:05:39.179 }
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "bdev",
00:05:39.179 "config": [
00:05:39.179 {
00:05:39.179 "method": "bdev_set_options",
00:05:39.179 "params": {
00:05:39.179 "bdev_io_pool_size": 65535,
00:05:39.179 "bdev_io_cache_size": 256,
00:05:39.179 "bdev_auto_examine": true,
00:05:39.179 "iobuf_small_cache_size": 128,
00:05:39.179 "iobuf_large_cache_size": 16
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "bdev_raid_set_options",
00:05:39.179 "params": {
00:05:39.179 "process_window_size_kb": 1024,
00:05:39.179 "process_max_bandwidth_mb_sec": 0
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "bdev_iscsi_set_options",
00:05:39.179 "params": {
00:05:39.179 "timeout_sec": 30
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "bdev_nvme_set_options",
00:05:39.179 "params": {
00:05:39.179 "action_on_timeout": "none",
00:05:39.179 "timeout_us": 0,
00:05:39.179 "timeout_admin_us": 0,
00:05:39.179 "keep_alive_timeout_ms": 10000,
00:05:39.179 "arbitration_burst": 0,
00:05:39.179 "low_priority_weight": 0,
00:05:39.179 "medium_priority_weight": 0,
00:05:39.179 "high_priority_weight": 0,
00:05:39.179 "nvme_adminq_poll_period_us": 10000,
00:05:39.179 "nvme_ioq_poll_period_us": 0,
00:05:39.179 "io_queue_requests": 0,
00:05:39.179 "delay_cmd_submit": true,
00:05:39.179 "transport_retry_count": 4,
00:05:39.179 "bdev_retry_count": 3,
00:05:39.179 "transport_ack_timeout": 0,
00:05:39.179 "ctrlr_loss_timeout_sec": 0,
00:05:39.179 "reconnect_delay_sec": 0,
00:05:39.179 "fast_io_fail_timeout_sec": 0,
00:05:39.179 "disable_auto_failback": false,
00:05:39.179 "generate_uuids": false,
00:05:39.179 "transport_tos": 0,
00:05:39.179 "nvme_error_stat": false,
00:05:39.179 "rdma_srq_size": 0,
00:05:39.179 "io_path_stat": false,
00:05:39.179 "allow_accel_sequence": false,
00:05:39.179 "rdma_max_cq_size": 0,
00:05:39.179 "rdma_cm_event_timeout_ms": 0,
00:05:39.179 "dhchap_digests": [
00:05:39.179 "sha256",
00:05:39.179 "sha384",
00:05:39.179 "sha512"
00:05:39.179 ],
00:05:39.179 "dhchap_dhgroups": [
00:05:39.179 "null",
00:05:39.179 "ffdhe2048",
00:05:39.179 "ffdhe3072",
00:05:39.179 "ffdhe4096",
00:05:39.179 "ffdhe6144",
00:05:39.179 "ffdhe8192"
00:05:39.179 ]
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "bdev_nvme_set_hotplug",
00:05:39.179 "params": {
00:05:39.179 "period_us": 100000,
00:05:39.179 "enable": false
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "bdev_wait_for_examine"
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "scsi",
00:05:39.179 "config": null
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "scheduler",
00:05:39.179 "config": [
00:05:39.179 {
00:05:39.179 "method": "framework_set_scheduler",
00:05:39.179 "params": {
00:05:39.179 "name": "static"
00:05:39.179 }
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "vhost_scsi",
00:05:39.179 "config": []
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "vhost_blk",
00:05:39.179 "config": []
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "ublk",
00:05:39.179 "config": []
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "nbd",
00:05:39.179 "config": []
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "nvmf",
00:05:39.179 "config": [
00:05:39.179 {
00:05:39.179 "method": "nvmf_set_config",
00:05:39.179 "params": {
00:05:39.179 "discovery_filter": "match_any",
00:05:39.179 "admin_cmd_passthru": {
00:05:39.179 "identify_ctrlr": false
00:05:39.179 },
00:05:39.179 "dhchap_digests": [
00:05:39.179 "sha256",
00:05:39.179 "sha384",
00:05:39.179 "sha512"
00:05:39.179 ],
00:05:39.179 "dhchap_dhgroups": [
00:05:39.179 "null",
00:05:39.179 "ffdhe2048",
00:05:39.179 "ffdhe3072",
00:05:39.179 "ffdhe4096",
00:05:39.179 "ffdhe6144",
00:05:39.179 "ffdhe8192"
00:05:39.179 ]
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "nvmf_set_max_subsystems",
00:05:39.179 "params": {
00:05:39.179 "max_subsystems": 1024
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "nvmf_set_crdt",
00:05:39.179 "params": {
00:05:39.179 "crdt1": 0,
00:05:39.179 "crdt2": 0,
00:05:39.179 "crdt3": 0
00:05:39.179 }
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "method": "nvmf_create_transport",
00:05:39.179 "params": {
00:05:39.179 "trtype": "TCP",
00:05:39.179 "max_queue_depth": 128,
00:05:39.179 "max_io_qpairs_per_ctrlr": 127,
00:05:39.179 "in_capsule_data_size": 4096,
00:05:39.179 "max_io_size": 131072,
00:05:39.179 "io_unit_size": 131072,
00:05:39.179 "max_aq_depth": 128,
00:05:39.179 "num_shared_buffers": 511,
00:05:39.179 "buf_cache_size": 4294967295,
00:05:39.179 "dif_insert_or_strip": false,
00:05:39.179 "zcopy": false,
00:05:39.179 "c2h_success": true,
00:05:39.179 "sock_priority": 0,
00:05:39.179 "abort_timeout_sec": 1,
00:05:39.179 "ack_timeout": 0,
00:05:39.179 "data_wr_pool_size": 0
00:05:39.179 }
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 },
00:05:39.179 {
00:05:39.179 "subsystem": "iscsi",
00:05:39.179 "config": [
00:05:39.179 {
00:05:39.179 "method": "iscsi_set_options",
00:05:39.179 "params": {
00:05:39.179 "node_base": "iqn.2016-06.io.spdk",
00:05:39.179 "max_sessions": 128,
00:05:39.179 "max_connections_per_session": 2,
00:05:39.179 "max_queue_depth": 64,
00:05:39.179 "default_time2wait": 2,
00:05:39.179 "default_time2retain": 20,
00:05:39.179 "first_burst_length": 8192,
00:05:39.179 "immediate_data": true,
00:05:39.179 "allow_duplicated_isid": false,
00:05:39.179 "error_recovery_level": 0,
00:05:39.179 "nop_timeout": 60,
00:05:39.179 "nop_in_interval": 30,
00:05:39.179 "disable_chap": false,
00:05:39.179 "require_chap": false,
00:05:39.179 "mutual_chap": false,
00:05:39.179 "chap_group": 0,
00:05:39.179 "max_large_datain_per_connection": 64,
00:05:39.179 "max_r2t_per_connection": 4,
00:05:39.179 "pdu_pool_size": 36864,
00:05:39.179 "immediate_data_pool_size": 16384,
00:05:39.179 "data_out_pool_size": 2048
00:05:39.179 }
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 }
00:05:39.179 ]
00:05:39.179 }
00:05:39.179 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:05:39.179 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70075
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70075 ']'
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70075
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70075
killing process with pid 70075
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70075'
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70075
00:05:39.180 13:54:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70075
00:05:39.439 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70104
00:05:39.439 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:39.439 13:54:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70104
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70104 ']'
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70104
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70104
killing process with pid 70104
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70104'
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70104
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70104
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
************************************
00:05:44.726 END TEST skip_rpc_with_json
00:05:44.726 ************************************
00:05:44.726
00:05:44.726 real 0m6.745s
00:05:44.726 user 0m6.274s
00:05:44.726 sys 0m0.713s
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:44.726 13:54:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:05:44.726 13:54:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:44.726 13:54:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:44.726 13:54:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.726 ************************************
00:05:44.726 START TEST skip_rpc_with_delay
00:05:44.726 ************************************
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:44.726 13:54:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:05:44.726 [2024-11-17 13:54:22.970121] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:05:44.726 [2024-11-17 13:54:22.970730] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 ))
************************************
END TEST skip_rpc_with_delay
************************************
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:44.726
00:05:44.726 real 0m0.114s
00:05:44.726 user 0m0.065s
00:05:44.726 sys 0m0.047s
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:44.726 13:54:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:44.985 13:54:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:44.985 13:54:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:44.985 13:54:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:44.985 13:54:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:44.985 13:54:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:44.985 13:54:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:44.985 ************************************
00:05:44.985 START TEST exit_on_failed_rpc_init
00:05:44.985 ************************************
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70215
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70215
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70215 ']'
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:44.985 13:54:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:44.985 [2024-11-17 13:54:23.133024] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:05:44.985 [2024-11-17 13:54:23.133282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70215 ]
00:05:44.985 [2024-11-17 13:54:23.279135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.242 [2024-11-17 13:54:23.311295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:05:45.883 13:54:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:05:45.883 [2024-11-17 13:54:24.017252] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:05:45.883 [2024-11-17 13:54:24.017361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ]
00:05:46.141 [2024-11-17 13:54:24.163397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.141 [2024-11-17 13:54:24.194608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:46.141 [2024-11-17 13:54:24.194696] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:46.141 [2024-11-17 13:54:24.194711] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:46.141 [2024-11-17 13:54:24.194725] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70215
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70215 ']'
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70215
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70215
killing process with pid 70215
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70215'
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70215
00:05:46.141 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70215
00:05:46.400
00:05:46.400 real 0m1.502s
00:05:46.400 user 0m1.631s
00:05:46.400 sys 0m0.383s
************************************
END TEST exit_on_failed_rpc_init
************************************
00:05:46.400 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.400 13:54:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:46.400 13:54:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:05:46.400
00:05:46.400 real 0m13.964s
00:05:46.400 user 0m13.060s
00:05:46.400 sys 0m1.534s
00:05:46.400 13:54:24 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.400 13:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:46.400 ************************************
00:05:46.400 END TEST skip_rpc
00:05:46.400 ************************************
00:05:46.400 13:54:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:46.400 13:54:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:46.400 13:54:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:46.400 13:54:24 -- common/autotest_common.sh@10 -- # set +x
00:05:46.400 ************************************
00:05:46.400 START TEST rpc_client
00:05:46.400 ************************************
00:05:46.400 13:54:24 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:05:46.400 * Looking for test storage...
00:05:46.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@345 -- # : 1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@353 -- # local d=1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@355 -- # echo 1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@353 -- # local d=2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@355 -- # echo 2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:46.660 13:54:24 rpc_client -- scripts/common.sh@368 -- # return 0
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.660 --rc genhtml_branch_coverage=1
00:05:46.660 --rc genhtml_function_coverage=1
00:05:46.660 --rc genhtml_legend=1
00:05:46.660 --rc geninfo_all_blocks=1
00:05:46.660 --rc geninfo_unexecuted_blocks=1
00:05:46.660
00:05:46.660 '
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.660 --rc genhtml_branch_coverage=1
00:05:46.660 --rc genhtml_function_coverage=1
00:05:46.660 --rc genhtml_legend=1
00:05:46.660 --rc geninfo_all_blocks=1
00:05:46.660 --rc geninfo_unexecuted_blocks=1
00:05:46.660
00:05:46.660 '
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.660 --rc genhtml_branch_coverage=1
00:05:46.660 --rc genhtml_function_coverage=1
00:05:46.660 --rc genhtml_legend=1
00:05:46.660 --rc geninfo_all_blocks=1
00:05:46.660 --rc geninfo_unexecuted_blocks=1
00:05:46.660
00:05:46.660 '
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:46.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.660 --rc genhtml_branch_coverage=1
00:05:46.660 --rc genhtml_function_coverage=1
00:05:46.660 --rc genhtml_legend=1
00:05:46.660 --rc geninfo_all_blocks=1
00:05:46.660 --rc geninfo_unexecuted_blocks=1
00:05:46.660
00:05:46.660 '
00:05:46.660 13:54:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
OK
00:05:46.660 13:54:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:46.660
00:05:46.660 real 0m0.171s
00:05:46.660 user 0m0.100s
00:05:46.660 sys 0m0.078s
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:46.660 13:54:24 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:46.660 ************************************
00:05:46.660 END TEST rpc_client
00:05:46.660 ************************************
00:05:46.660 13:54:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:46.660 13:54:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:46.660 13:54:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:46.660 13:54:24 -- common/autotest_common.sh@10 -- # set +x
00:05:46.660 ************************************
00:05:46.660 START TEST json_config
00:05:46.660 ************************************
00:05:46.660 13:54:24 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:05:46.660 13:54:24 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:46.660 13:54:24 json_config -- common/autotest_common.sh@1681 -- # lcov --version
00:05:46.660 13:54:24 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:46.922 13:54:24 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:46.922 13:54:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:46.922 13:54:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:46.922 13:54:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:46.922 13:54:24 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:05:46.922 13:54:24 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:05:46.922 13:54:24 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:05:46.922 13:54:24 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:05:46.922 13:54:24 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:05:46.922 13:54:24 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:05:46.922 13:54:24 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:05:46.922 13:54:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:46.922 13:54:24 json_config -- scripts/common.sh@344 -- # case "$op" in
00:05:46.922 13:54:24 json_config -- scripts/common.sh@345 -- # : 1
00:05:46.922 13:54:24 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:46.922 13:54:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:46.922 13:54:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:46.922 13:54:24 json_config -- scripts/common.sh@353 -- # local d=1 00:05:46.922 13:54:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.922 13:54:24 json_config -- scripts/common.sh@355 -- # echo 1 00:05:46.922 13:54:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.922 13:54:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:46.922 13:54:24 json_config -- scripts/common.sh@353 -- # local d=2 00:05:46.922 13:54:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.922 13:54:24 json_config -- scripts/common.sh@355 -- # echo 2 00:05:46.922 13:54:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.922 13:54:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.922 13:54:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.922 13:54:24 json_config -- scripts/common.sh@368 -- # return 0 00:05:46.922 13:54:24 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.922 13:54:24 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.922 --rc genhtml_branch_coverage=1 00:05:46.922 --rc genhtml_function_coverage=1 00:05:46.922 --rc genhtml_legend=1 00:05:46.922 --rc geninfo_all_blocks=1 00:05:46.922 --rc geninfo_unexecuted_blocks=1 00:05:46.922 00:05:46.922 ' 00:05:46.922 13:54:24 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.922 --rc genhtml_branch_coverage=1 00:05:46.922 --rc genhtml_function_coverage=1 00:05:46.922 --rc genhtml_legend=1 00:05:46.922 --rc geninfo_all_blocks=1 00:05:46.922 --rc geninfo_unexecuted_blocks=1 00:05:46.922 00:05:46.922 ' 00:05:46.923 13:54:24 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.923 --rc genhtml_branch_coverage=1 00:05:46.923 --rc genhtml_function_coverage=1 00:05:46.923 --rc genhtml_legend=1 00:05:46.923 --rc geninfo_all_blocks=1 00:05:46.923 --rc geninfo_unexecuted_blocks=1 00:05:46.923 00:05:46.923 ' 00:05:46.923 13:54:24 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.923 --rc genhtml_branch_coverage=1 00:05:46.923 --rc genhtml_function_coverage=1 00:05:46.923 --rc genhtml_legend=1 00:05:46.923 --rc geninfo_all_blocks=1 00:05:46.923 --rc geninfo_unexecuted_blocks=1 00:05:46.923 00:05:46.923 ' 00:05:46.923 13:54:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.923 13:54:24 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.923 13:54:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.923 13:54:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.923 13:54:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.923 13:54:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.923 13:54:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.923 13:54:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.923 13:54:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.923 13:54:24 json_config -- paths/export.sh@5 -- # export PATH 00:05:46.923 13:54:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@51 -- # : 0 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.923 13:54:24 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.923 13:54:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.923 13:54:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.923 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.923 13:54:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.923 13:54:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.923 13:54:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.923 WARNING: No tests are enabled so not running JSON configuration tests 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:46.923 13:54:25 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:46.923 00:05:46.923 real 0m0.145s 00:05:46.923 user 0m0.089s 00:05:46.923 sys 0m0.051s 00:05:46.923 13:54:25 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.923 13:54:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.923 ************************************ 00:05:46.923 END TEST json_config 00:05:46.923 ************************************ 00:05:46.923 13:54:25 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.923 13:54:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.923 13:54:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.923 13:54:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.923 ************************************ 00:05:46.923 START TEST json_config_extra_key 00:05:46.923 ************************************ 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.923 13:54:25 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.923 13:54:25 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:46.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.923 --rc genhtml_branch_coverage=1 00:05:46.923 --rc genhtml_function_coverage=1 00:05:46.923 --rc genhtml_legend=1 00:05:46.923 --rc geninfo_all_blocks=1 00:05:46.923 --rc geninfo_unexecuted_blocks=1 00:05:46.923 00:05:46.923 ' 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:46.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.923 --rc genhtml_branch_coverage=1 00:05:46.923 --rc genhtml_function_coverage=1 00:05:46.923 --rc genhtml_legend=1 00:05:46.923 --rc geninfo_all_blocks=1 00:05:46.923 --rc geninfo_unexecuted_blocks=1 00:05:46.923 00:05:46.923 ' 00:05:46.923 13:54:25 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:46.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.923 --rc genhtml_branch_coverage=1 00:05:46.923 --rc genhtml_function_coverage=1 00:05:46.923 --rc genhtml_legend=1 00:05:46.923 --rc geninfo_all_blocks=1 00:05:46.923 --rc geninfo_unexecuted_blocks=1 00:05:46.923 00:05:46.923 ' 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:46.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.924 --rc genhtml_branch_coverage=1 00:05:46.924 --rc 
genhtml_function_coverage=1 00:05:46.924 --rc genhtml_legend=1 00:05:46.924 --rc geninfo_all_blocks=1 00:05:46.924 --rc geninfo_unexecuted_blocks=1 00:05:46.924 00:05:46.924 ' 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=49cce2c4-f077-41b8-9fb2-2aa7b66f16e7 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.924 13:54:25 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.924 13:54:25 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.924 13:54:25 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.924 13:54:25 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.924 13:54:25 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.924 13:54:25 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.924 13:54:25 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.924 13:54:25 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.924 13:54:25 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.924 13:54:25 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.924 INFO: launching applications... 00:05:46.924 Waiting for target to run... 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
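The json_config/common.sh setup traced above keeps per-application state in bash associative arrays, one array per attribute (pid, RPC socket, EAL parameters, config file), all keyed by the logical app name 'target'. A minimal sketch of that layout (the config path below is illustrative, not the repository's real one):

    declare -A app_pid=( [target]='' )     # filled in later: app_pid[$app]=$!
    declare -A app_socket=( [target]='/var/tmp/spdk_tgt.sock' )
    declare -A app_params=( [target]='-m 0x1 -s 1024' )
    declare -A configs_path=( [target]='/path/to/extra_key.json' )

    app=target
    echo "launching $app with '${app_params[$app]}' on ${app_socket[$app]}"

Keying every array by the same app name lets one helper start, query, or stop any app by passing a single string, which is why the trace shows "local app=target" at the top of each common.sh function.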
00:05:46.924 13:54:25 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70410 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70410 /var/tmp/spdk_tgt.sock 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70410 ']' 00:05:46.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.924 13:54:25 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.924 13:54:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:47.185 [2024-11-17 13:54:25.248463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:47.185 [2024-11-17 13:54:25.248575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70410 ] 00:05:47.446 [2024-11-17 13:54:25.545508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.446 [2024-11-17 13:54:25.565120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.018 13:54:26 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.018 00:05:48.018 13:54:26 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:48.018 INFO: shutting down applications... 00:05:48.018 13:54:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
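Around this point the trace launches spdk_tgt against a private RPC socket, waits for it to listen, and (just below) shuts it down with SIGINT while polling kill -0 up to 30 times at 0.5 s intervals. A sketch of that start/stop pattern under stated assumptions: the relative binary and script paths are illustrative, and the rpc.py probe stands in for waitforlisten, which this log does not expand:

    sock=/var/tmp/spdk_tgt.sock
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --json extra_key.json &
    pid=$!

    # waitforlisten-style probe: poll the RPC socket until it answers.
    until ./scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null; do
        sleep 0.1
    done

    kill -SIGINT "$pid"                        # ask for a clean shutdown
    for (( i = 0; i < 30; i++ )); do           # 30 x 0.5 s, per the trace
        kill -0 "$pid" 2>/dev/null || break    # pid gone: shutdown done
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && echo "target did not exit" >&2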
00:05:48.018 13:54:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70410 ]] 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70410 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70410 00:05:48.018 13:54:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70410 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.591 SPDK target shutdown done 00:05:48.591 13:54:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.591 Success 00:05:48.591 13:54:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.591 00:05:48.591 real 0m1.566s 00:05:48.591 user 0m1.377s 00:05:48.591 sys 0m0.325s 00:05:48.591 13:54:26 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.591 13:54:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.591 ************************************ 00:05:48.591 END TEST json_config_extra_key 00:05:48.591 ************************************ 00:05:48.591 13:54:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.591 13:54:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.591 13:54:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.591 13:54:26 -- common/autotest_common.sh@10 -- # set +x 00:05:48.591 ************************************ 00:05:48.591 START TEST alias_rpc 00:05:48.591 ************************************ 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.591 * Looking for test storage... 
00:05:48.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.591 13:54:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.591 --rc genhtml_branch_coverage=1 00:05:48.591 --rc genhtml_function_coverage=1 00:05:48.591 --rc genhtml_legend=1 00:05:48.591 --rc geninfo_all_blocks=1 00:05:48.591 --rc geninfo_unexecuted_blocks=1 00:05:48.591 00:05:48.591 ' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.591 --rc genhtml_branch_coverage=1 00:05:48.591 --rc genhtml_function_coverage=1 00:05:48.591 --rc genhtml_legend=1 00:05:48.591 --rc geninfo_all_blocks=1 00:05:48.591 --rc geninfo_unexecuted_blocks=1 00:05:48.591 00:05:48.591 ' 00:05:48.591 13:54:26 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.591 --rc genhtml_branch_coverage=1 00:05:48.591 --rc genhtml_function_coverage=1 00:05:48.591 --rc genhtml_legend=1 00:05:48.591 --rc geninfo_all_blocks=1 00:05:48.591 --rc geninfo_unexecuted_blocks=1 00:05:48.591 00:05:48.591 ' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.591 --rc genhtml_branch_coverage=1 00:05:48.591 --rc genhtml_function_coverage=1 00:05:48.591 --rc genhtml_legend=1 00:05:48.591 --rc geninfo_all_blocks=1 00:05:48.591 --rc geninfo_unexecuted_blocks=1 00:05:48.591 00:05:48.591 ' 00:05:48.591 13:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.591 13:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70484 00:05:48.591 13:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70484 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70484 ']' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.591 13:54:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.591 13:54:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:48.591 [2024-11-17 13:54:26.887542] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:48.592 [2024-11-17 13:54:26.887699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:05:48.853 [2024-11-17 13:54:27.036342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.853 [2024-11-17 13:54:27.088211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.425 13:54:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.425 13:54:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.425 13:54:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:49.685 13:54:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70484 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70484 ']' 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70484 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70484 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.685 killing process with pid 70484 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70484' 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@969 -- # kill 70484 00:05:49.685 13:54:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 70484 00:05:49.945 00:05:49.945 real 0m1.502s 00:05:49.945 user 0m1.486s 00:05:49.945 sys 0m0.442s 00:05:49.945 13:54:28 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.945 13:54:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.945 ************************************ 00:05:49.945 END TEST alias_rpc 00:05:49.945 ************************************ 00:05:49.945 13:54:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:49.945 13:54:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:49.945 13:54:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.945 13:54:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.945 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.945 ************************************ 00:05:49.945 START TEST spdkcli_tcp 00:05:49.945 ************************************ 00:05:49.945 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:50.216 * Looking for test storage... 
00:05:50.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.216 13:54:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 
00:05:50.216 ' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:50.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.216 --rc genhtml_branch_coverage=1 00:05:50.216 --rc genhtml_function_coverage=1 00:05:50.216 --rc genhtml_legend=1 00:05:50.216 --rc geninfo_all_blocks=1 00:05:50.216 --rc geninfo_unexecuted_blocks=1 00:05:50.216 00:05:50.216 ' 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70563 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70563 00:05:50.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70563 ']' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.216 13:54:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.216 13:54:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.216 [2024-11-17 13:54:28.431472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:50.216 [2024-11-17 13:54:28.431598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70563 ] 00:05:50.476 [2024-11-17 13:54:28.583356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.476 [2024-11-17 13:54:28.634007] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.476 [2024-11-17 13:54:28.634071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.050 13:54:29 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.050 13:54:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:51.050 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70580 00:05:51.050 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.050 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.312 [ 00:05:51.312 "bdev_malloc_delete", 00:05:51.312 "bdev_malloc_create", 00:05:51.312 "bdev_null_resize", 00:05:51.312 "bdev_null_delete", 00:05:51.312 "bdev_null_create", 00:05:51.312 "bdev_nvme_cuse_unregister", 00:05:51.312 "bdev_nvme_cuse_register", 00:05:51.312 "bdev_opal_new_user", 00:05:51.312 "bdev_opal_set_lock_state", 00:05:51.312 "bdev_opal_delete", 00:05:51.312 "bdev_opal_get_info", 00:05:51.312 "bdev_opal_create", 00:05:51.312 "bdev_nvme_opal_revert", 00:05:51.312 "bdev_nvme_opal_init", 00:05:51.312 "bdev_nvme_send_cmd", 00:05:51.312 "bdev_nvme_set_keys", 00:05:51.312 "bdev_nvme_get_path_iostat", 00:05:51.312 "bdev_nvme_get_mdns_discovery_info", 00:05:51.312 "bdev_nvme_stop_mdns_discovery", 00:05:51.312 "bdev_nvme_start_mdns_discovery", 00:05:51.312 "bdev_nvme_set_multipath_policy", 00:05:51.312 "bdev_nvme_set_preferred_path", 00:05:51.312 "bdev_nvme_get_io_paths", 00:05:51.312 "bdev_nvme_remove_error_injection", 00:05:51.312 "bdev_nvme_add_error_injection", 00:05:51.312 "bdev_nvme_get_discovery_info", 00:05:51.312 "bdev_nvme_stop_discovery", 00:05:51.312 "bdev_nvme_start_discovery", 00:05:51.312 "bdev_nvme_get_controller_health_info", 00:05:51.312 "bdev_nvme_disable_controller", 00:05:51.312 "bdev_nvme_enable_controller", 00:05:51.312 "bdev_nvme_reset_controller", 00:05:51.312 "bdev_nvme_get_transport_statistics", 00:05:51.312 "bdev_nvme_apply_firmware", 00:05:51.312 "bdev_nvme_detach_controller", 00:05:51.312 "bdev_nvme_get_controllers", 00:05:51.312 "bdev_nvme_attach_controller", 00:05:51.312 "bdev_nvme_set_hotplug", 00:05:51.312 "bdev_nvme_set_options", 00:05:51.312 "bdev_passthru_delete", 00:05:51.312 "bdev_passthru_create", 00:05:51.312 "bdev_lvol_set_parent_bdev", 00:05:51.312 "bdev_lvol_set_parent", 00:05:51.312 "bdev_lvol_check_shallow_copy", 00:05:51.312 "bdev_lvol_start_shallow_copy", 00:05:51.312 "bdev_lvol_grow_lvstore", 00:05:51.312 "bdev_lvol_get_lvols", 00:05:51.312 "bdev_lvol_get_lvstores", 00:05:51.312 "bdev_lvol_delete", 00:05:51.312 "bdev_lvol_set_read_only", 00:05:51.312 "bdev_lvol_resize", 00:05:51.312 "bdev_lvol_decouple_parent", 00:05:51.312 "bdev_lvol_inflate", 00:05:51.312 "bdev_lvol_rename", 00:05:51.312 "bdev_lvol_clone_bdev", 00:05:51.312 "bdev_lvol_clone", 00:05:51.312 "bdev_lvol_snapshot", 00:05:51.312 "bdev_lvol_create", 00:05:51.312 "bdev_lvol_delete_lvstore", 00:05:51.312 "bdev_lvol_rename_lvstore", 00:05:51.312 
"bdev_lvol_create_lvstore", 00:05:51.312 "bdev_raid_set_options", 00:05:51.312 "bdev_raid_remove_base_bdev", 00:05:51.312 "bdev_raid_add_base_bdev", 00:05:51.312 "bdev_raid_delete", 00:05:51.312 "bdev_raid_create", 00:05:51.312 "bdev_raid_get_bdevs", 00:05:51.312 "bdev_error_inject_error", 00:05:51.312 "bdev_error_delete", 00:05:51.312 "bdev_error_create", 00:05:51.312 "bdev_split_delete", 00:05:51.312 "bdev_split_create", 00:05:51.312 "bdev_delay_delete", 00:05:51.312 "bdev_delay_create", 00:05:51.312 "bdev_delay_update_latency", 00:05:51.312 "bdev_zone_block_delete", 00:05:51.312 "bdev_zone_block_create", 00:05:51.312 "blobfs_create", 00:05:51.312 "blobfs_detect", 00:05:51.312 "blobfs_set_cache_size", 00:05:51.312 "bdev_xnvme_delete", 00:05:51.312 "bdev_xnvme_create", 00:05:51.312 "bdev_aio_delete", 00:05:51.312 "bdev_aio_rescan", 00:05:51.312 "bdev_aio_create", 00:05:51.312 "bdev_ftl_set_property", 00:05:51.312 "bdev_ftl_get_properties", 00:05:51.312 "bdev_ftl_get_stats", 00:05:51.312 "bdev_ftl_unmap", 00:05:51.312 "bdev_ftl_unload", 00:05:51.313 "bdev_ftl_delete", 00:05:51.313 "bdev_ftl_load", 00:05:51.313 "bdev_ftl_create", 00:05:51.313 "bdev_virtio_attach_controller", 00:05:51.313 "bdev_virtio_scsi_get_devices", 00:05:51.313 "bdev_virtio_detach_controller", 00:05:51.313 "bdev_virtio_blk_set_hotplug", 00:05:51.313 "bdev_iscsi_delete", 00:05:51.313 "bdev_iscsi_create", 00:05:51.313 "bdev_iscsi_set_options", 00:05:51.313 "accel_error_inject_error", 00:05:51.313 "ioat_scan_accel_module", 00:05:51.313 "dsa_scan_accel_module", 00:05:51.313 "iaa_scan_accel_module", 00:05:51.313 "keyring_file_remove_key", 00:05:51.313 "keyring_file_add_key", 00:05:51.313 "keyring_linux_set_options", 00:05:51.313 "fsdev_aio_delete", 00:05:51.313 "fsdev_aio_create", 00:05:51.313 "iscsi_get_histogram", 00:05:51.313 "iscsi_enable_histogram", 00:05:51.313 "iscsi_set_options", 00:05:51.313 "iscsi_get_auth_groups", 00:05:51.313 "iscsi_auth_group_remove_secret", 00:05:51.313 "iscsi_auth_group_add_secret", 00:05:51.313 "iscsi_delete_auth_group", 00:05:51.313 "iscsi_create_auth_group", 00:05:51.313 "iscsi_set_discovery_auth", 00:05:51.313 "iscsi_get_options", 00:05:51.313 "iscsi_target_node_request_logout", 00:05:51.313 "iscsi_target_node_set_redirect", 00:05:51.313 "iscsi_target_node_set_auth", 00:05:51.313 "iscsi_target_node_add_lun", 00:05:51.313 "iscsi_get_stats", 00:05:51.313 "iscsi_get_connections", 00:05:51.313 "iscsi_portal_group_set_auth", 00:05:51.313 "iscsi_start_portal_group", 00:05:51.313 "iscsi_delete_portal_group", 00:05:51.313 "iscsi_create_portal_group", 00:05:51.313 "iscsi_get_portal_groups", 00:05:51.313 "iscsi_delete_target_node", 00:05:51.313 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.313 "iscsi_target_node_add_pg_ig_maps", 00:05:51.313 "iscsi_create_target_node", 00:05:51.313 "iscsi_get_target_nodes", 00:05:51.313 "iscsi_delete_initiator_group", 00:05:51.313 "iscsi_initiator_group_remove_initiators", 00:05:51.313 "iscsi_initiator_group_add_initiators", 00:05:51.313 "iscsi_create_initiator_group", 00:05:51.313 "iscsi_get_initiator_groups", 00:05:51.313 "nvmf_set_crdt", 00:05:51.313 "nvmf_set_config", 00:05:51.313 "nvmf_set_max_subsystems", 00:05:51.313 "nvmf_stop_mdns_prr", 00:05:51.313 "nvmf_publish_mdns_prr", 00:05:51.313 "nvmf_subsystem_get_listeners", 00:05:51.313 "nvmf_subsystem_get_qpairs", 00:05:51.313 "nvmf_subsystem_get_controllers", 00:05:51.313 "nvmf_get_stats", 00:05:51.313 "nvmf_get_transports", 00:05:51.313 "nvmf_create_transport", 00:05:51.313 "nvmf_get_targets", 00:05:51.313 
"nvmf_delete_target", 00:05:51.313 "nvmf_create_target", 00:05:51.313 "nvmf_subsystem_allow_any_host", 00:05:51.313 "nvmf_subsystem_set_keys", 00:05:51.313 "nvmf_subsystem_remove_host", 00:05:51.313 "nvmf_subsystem_add_host", 00:05:51.313 "nvmf_ns_remove_host", 00:05:51.313 "nvmf_ns_add_host", 00:05:51.313 "nvmf_subsystem_remove_ns", 00:05:51.313 "nvmf_subsystem_set_ns_ana_group", 00:05:51.313 "nvmf_subsystem_add_ns", 00:05:51.313 "nvmf_subsystem_listener_set_ana_state", 00:05:51.313 "nvmf_discovery_get_referrals", 00:05:51.313 "nvmf_discovery_remove_referral", 00:05:51.313 "nvmf_discovery_add_referral", 00:05:51.313 "nvmf_subsystem_remove_listener", 00:05:51.313 "nvmf_subsystem_add_listener", 00:05:51.313 "nvmf_delete_subsystem", 00:05:51.313 "nvmf_create_subsystem", 00:05:51.313 "nvmf_get_subsystems", 00:05:51.313 "env_dpdk_get_mem_stats", 00:05:51.313 "nbd_get_disks", 00:05:51.313 "nbd_stop_disk", 00:05:51.313 "nbd_start_disk", 00:05:51.313 "ublk_recover_disk", 00:05:51.313 "ublk_get_disks", 00:05:51.313 "ublk_stop_disk", 00:05:51.313 "ublk_start_disk", 00:05:51.313 "ublk_destroy_target", 00:05:51.313 "ublk_create_target", 00:05:51.313 "virtio_blk_create_transport", 00:05:51.313 "virtio_blk_get_transports", 00:05:51.313 "vhost_controller_set_coalescing", 00:05:51.313 "vhost_get_controllers", 00:05:51.313 "vhost_delete_controller", 00:05:51.313 "vhost_create_blk_controller", 00:05:51.313 "vhost_scsi_controller_remove_target", 00:05:51.313 "vhost_scsi_controller_add_target", 00:05:51.313 "vhost_start_scsi_controller", 00:05:51.313 "vhost_create_scsi_controller", 00:05:51.313 "thread_set_cpumask", 00:05:51.313 "scheduler_set_options", 00:05:51.313 "framework_get_governor", 00:05:51.313 "framework_get_scheduler", 00:05:51.313 "framework_set_scheduler", 00:05:51.313 "framework_get_reactors", 00:05:51.313 "thread_get_io_channels", 00:05:51.313 "thread_get_pollers", 00:05:51.313 "thread_get_stats", 00:05:51.313 "framework_monitor_context_switch", 00:05:51.313 "spdk_kill_instance", 00:05:51.313 "log_enable_timestamps", 00:05:51.313 "log_get_flags", 00:05:51.313 "log_clear_flag", 00:05:51.313 "log_set_flag", 00:05:51.313 "log_get_level", 00:05:51.313 "log_set_level", 00:05:51.313 "log_get_print_level", 00:05:51.313 "log_set_print_level", 00:05:51.313 "framework_enable_cpumask_locks", 00:05:51.313 "framework_disable_cpumask_locks", 00:05:51.313 "framework_wait_init", 00:05:51.313 "framework_start_init", 00:05:51.313 "scsi_get_devices", 00:05:51.313 "bdev_get_histogram", 00:05:51.313 "bdev_enable_histogram", 00:05:51.313 "bdev_set_qos_limit", 00:05:51.313 "bdev_set_qd_sampling_period", 00:05:51.313 "bdev_get_bdevs", 00:05:51.313 "bdev_reset_iostat", 00:05:51.313 "bdev_get_iostat", 00:05:51.313 "bdev_examine", 00:05:51.313 "bdev_wait_for_examine", 00:05:51.313 "bdev_set_options", 00:05:51.313 "accel_get_stats", 00:05:51.313 "accel_set_options", 00:05:51.313 "accel_set_driver", 00:05:51.313 "accel_crypto_key_destroy", 00:05:51.313 "accel_crypto_keys_get", 00:05:51.313 "accel_crypto_key_create", 00:05:51.313 "accel_assign_opc", 00:05:51.313 "accel_get_module_info", 00:05:51.313 "accel_get_opc_assignments", 00:05:51.313 "vmd_rescan", 00:05:51.313 "vmd_remove_device", 00:05:51.313 "vmd_enable", 00:05:51.313 "sock_get_default_impl", 00:05:51.313 "sock_set_default_impl", 00:05:51.313 "sock_impl_set_options", 00:05:51.313 "sock_impl_get_options", 00:05:51.313 "iobuf_get_stats", 00:05:51.313 "iobuf_set_options", 00:05:51.313 "keyring_get_keys", 00:05:51.313 "framework_get_pci_devices", 00:05:51.313 
"framework_get_config", 00:05:51.313 "framework_get_subsystems", 00:05:51.313 "fsdev_set_opts", 00:05:51.313 "fsdev_get_opts", 00:05:51.313 "trace_get_info", 00:05:51.313 "trace_get_tpoint_group_mask", 00:05:51.313 "trace_disable_tpoint_group", 00:05:51.313 "trace_enable_tpoint_group", 00:05:51.313 "trace_clear_tpoint_mask", 00:05:51.313 "trace_set_tpoint_mask", 00:05:51.313 "notify_get_notifications", 00:05:51.313 "notify_get_types", 00:05:51.313 "spdk_get_version", 00:05:51.313 "rpc_get_methods" 00:05:51.313 ] 00:05:51.313 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.313 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.313 13:54:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70563 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70563 ']' 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70563 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70563 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.313 13:54:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.313 killing process with pid 70563 00:05:51.314 13:54:29 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70563' 00:05:51.314 13:54:29 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70563 00:05:51.314 13:54:29 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70563 00:05:51.886 00:05:51.886 real 0m1.668s 00:05:51.886 user 0m2.885s 00:05:51.886 sys 0m0.492s 00:05:51.886 13:54:29 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.886 ************************************ 00:05:51.886 END TEST spdkcli_tcp 00:05:51.886 ************************************ 00:05:51.886 13:54:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 13:54:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.886 13:54:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.886 13:54:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.886 13:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 ************************************ 00:05:51.886 START TEST dpdk_mem_utility 00:05:51.886 ************************************ 00:05:51.886 13:54:29 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:51.886 * Looking for test storage... 
00:05:51.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.886 13:54:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.886 --rc genhtml_branch_coverage=1 00:05:51.886 --rc genhtml_function_coverage=1 00:05:51.886 --rc genhtml_legend=1 00:05:51.886 --rc geninfo_all_blocks=1 00:05:51.886 --rc geninfo_unexecuted_blocks=1 00:05:51.886 00:05:51.886 ' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.886 --rc 
genhtml_branch_coverage=1 00:05:51.886 --rc genhtml_function_coverage=1 00:05:51.886 --rc genhtml_legend=1 00:05:51.886 --rc geninfo_all_blocks=1 00:05:51.886 --rc geninfo_unexecuted_blocks=1 00:05:51.886 00:05:51.886 ' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.886 --rc genhtml_branch_coverage=1 00:05:51.886 --rc genhtml_function_coverage=1 00:05:51.886 --rc genhtml_legend=1 00:05:51.886 --rc geninfo_all_blocks=1 00:05:51.886 --rc geninfo_unexecuted_blocks=1 00:05:51.886 00:05:51.886 ' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.886 --rc genhtml_branch_coverage=1 00:05:51.886 --rc genhtml_function_coverage=1 00:05:51.886 --rc genhtml_legend=1 00:05:51.886 --rc geninfo_all_blocks=1 00:05:51.886 --rc geninfo_unexecuted_blocks=1 00:05:51.886 00:05:51.886 ' 00:05:51.886 13:54:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:51.886 13:54:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70658 00:05:51.886 13:54:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70658 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70658 ']' 00:05:51.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.886 13:54:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.886 13:54:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.886 [2024-11-17 13:54:30.171477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:51.886 [2024-11-17 13:54:30.171616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70658 ] 00:05:52.147 [2024-11-17 13:54:30.326021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.147 [2024-11-17 13:54:30.376054] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.742 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.742 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:52.742 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:52.742 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:52.742 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.742 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:52.742 { 00:05:52.742 "filename": "/tmp/spdk_mem_dump.txt" 00:05:52.742 } 00:05:52.742 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.742 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:53.004 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:53.004 1 heaps totaling size 860.000000 MiB 00:05:53.004 size: 860.000000 MiB heap id: 0 00:05:53.004 end heaps---------- 00:05:53.004 9 mempools totaling size 642.649841 MiB 00:05:53.004 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:53.004 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:53.004 size: 92.545471 MiB name: bdev_io_70658 00:05:53.004 size: 51.011292 MiB name: evtpool_70658 00:05:53.004 size: 50.003479 MiB name: msgpool_70658 00:05:53.005 size: 36.509338 MiB name: fsdev_io_70658 00:05:53.005 size: 21.763794 MiB name: PDU_Pool 00:05:53.005 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:53.005 size: 0.026123 MiB name: Session_Pool 00:05:53.005 end mempools------- 00:05:53.005 6 memzones totaling size 4.142822 MiB 00:05:53.005 size: 1.000366 MiB name: RG_ring_0_70658 00:05:53.005 size: 1.000366 MiB name: RG_ring_1_70658 00:05:53.005 size: 1.000366 MiB name: RG_ring_4_70658 00:05:53.005 size: 1.000366 MiB name: RG_ring_5_70658 00:05:53.005 size: 0.125366 MiB name: RG_ring_2_70658 00:05:53.005 size: 0.015991 MiB name: RG_ring_3_70658 00:05:53.005 end memzones------- 00:05:53.005 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:53.005 heap id: 0 total size: 860.000000 MiB number of busy elements: 313 number of free elements: 16 00:05:53.005 list of free elements. 
size: 13.935425 MiB 00:05:53.005 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:53.005 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:53.005 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:53.005 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:53.005 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:53.005 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:53.005 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:53.005 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:53.005 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:53.005 element at address: 0x20001d800000 with size: 0.566956 MiB 00:05:53.005 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:53.005 element at address: 0x200003e00000 with size: 0.488281 MiB 00:05:53.005 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:53.005 element at address: 0x200007000000 with size: 0.480286 MiB 00:05:53.005 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:53.005 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:53.005 list of standard malloc elements. size: 199.267883 MiB 00:05:53.005 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:53.005 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:53.005 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:53.005 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:53.005 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:53.005 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:53.005 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:53.005 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:53.005 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:53.005 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:05:53.005 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:53.005 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:53.006 element at 
address: 0x200003e7d780 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707af40 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:53.006 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d700 
with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:53.006 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:53.006 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891240 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891300 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d892ec0 with size: 0.000183 MiB 
00:05:53.007 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:53.007 element at 
address: 0x20001d895440 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:53.007 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e580 
with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:53.008 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:53.008 list of memzone associated elements. 
size: 646.796692 MiB 00:05:53.008 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:53.008 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:53.008 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:53.008 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:53.008 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:53.008 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70658_0 00:05:53.008 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:53.008 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70658_0 00:05:53.008 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:53.008 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70658_0 00:05:53.008 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:53.008 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70658_0 00:05:53.008 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:53.008 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:53.009 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:53.009 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:53.009 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:53.009 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70658 00:05:53.009 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:53.009 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70658 00:05:53.009 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:53.009 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70658 00:05:53.009 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:53.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:53.009 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:53.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:53.009 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:53.009 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:53.009 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:53.009 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:53.009 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:53.009 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70658 00:05:53.009 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:53.009 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70658 00:05:53.009 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:53.009 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70658 00:05:53.009 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:53.009 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70658 00:05:53.009 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:53.009 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70658 00:05:53.009 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:05:53.009 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70658 00:05:53.009 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:53.009 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:53.009 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:53.009 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:53.009 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:53.009 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:53.009 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:05:53.009 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70658 00:05:53.009 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:53.009 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:53.009 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:53.009 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:53.009 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:05:53.009 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70658 00:05:53.009 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:53.009 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:53.009 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:53.009 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70658 00:05:53.009 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:53.009 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70658 00:05:53.009 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:05:53.009 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70658 00:05:53.009 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:53.009 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:53.009 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:53.009 13:54:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70658 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70658 ']' 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70658 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70658 00:05:53.009 killing process with pid 70658 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70658' 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70658 00:05:53.009 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70658 00:05:53.271 ************************************ 00:05:53.271 END TEST dpdk_mem_utility 00:05:53.271 ************************************ 00:05:53.271 00:05:53.271 real 0m1.563s 00:05:53.271 user 0m1.535s 00:05:53.271 sys 0m0.479s 00:05:53.271 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.271 13:54:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.271 13:54:31 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.271 13:54:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.271 13:54:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.271 13:54:31 -- common/autotest_common.sh@10 -- # set +x 
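The dpdk_mem_utility run above reduces to three commands, which can be repeated by hand against a running target. A minimal sketch, assuming the same repo layout as this run and an spdk_tgt already listening on the default /var/tmp/spdk.sock (every path and RPC name below is taken from the log itself; nothing else is introduced):

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &                     # start the target; wait for /var/tmp/spdk.sock
    ./scripts/rpc.py env_dpdk_get_mem_stats    # asks the target to write /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # summarizes the dump: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0            # per-element listing for heap id 0, as dumped above

The bare invocation produces the heap/mempool/memzone summary seen earlier, and -m 0 produces the element-by-address listing for heap id 0.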
00:05:53.271 ************************************ 00:05:53.271 START TEST event 00:05:53.271 ************************************ 00:05:53.271 13:54:31 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.532 * Looking for test storage... 00:05:53.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.532 13:54:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.532 13:54:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.532 13:54:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.532 13:54:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.532 13:54:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.532 13:54:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.532 13:54:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.532 13:54:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.532 13:54:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.532 13:54:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.532 13:54:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.532 13:54:31 event -- scripts/common.sh@344 -- # case "$op" in 00:05:53.532 13:54:31 event -- scripts/common.sh@345 -- # : 1 00:05:53.532 13:54:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.532 13:54:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.532 13:54:31 event -- scripts/common.sh@365 -- # decimal 1 00:05:53.532 13:54:31 event -- scripts/common.sh@353 -- # local d=1 00:05:53.532 13:54:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.532 13:54:31 event -- scripts/common.sh@355 -- # echo 1 00:05:53.532 13:54:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.532 13:54:31 event -- scripts/common.sh@366 -- # decimal 2 00:05:53.532 13:54:31 event -- scripts/common.sh@353 -- # local d=2 00:05:53.532 13:54:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.532 13:54:31 event -- scripts/common.sh@355 -- # echo 2 00:05:53.532 13:54:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.532 13:54:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.532 13:54:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.532 13:54:31 event -- scripts/common.sh@368 -- # return 0 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.532 --rc genhtml_branch_coverage=1 00:05:53.532 --rc genhtml_function_coverage=1 00:05:53.532 --rc genhtml_legend=1 00:05:53.532 --rc geninfo_all_blocks=1 00:05:53.532 --rc geninfo_unexecuted_blocks=1 00:05:53.532 00:05:53.532 ' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.532 --rc genhtml_branch_coverage=1 00:05:53.532 --rc genhtml_function_coverage=1 00:05:53.532 --rc genhtml_legend=1 00:05:53.532 --rc 
geninfo_all_blocks=1 00:05:53.532 --rc geninfo_unexecuted_blocks=1 00:05:53.532 00:05:53.532 ' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.532 --rc genhtml_branch_coverage=1 00:05:53.532 --rc genhtml_function_coverage=1 00:05:53.532 --rc genhtml_legend=1 00:05:53.532 --rc geninfo_all_blocks=1 00:05:53.532 --rc geninfo_unexecuted_blocks=1 00:05:53.532 00:05:53.532 ' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.532 --rc genhtml_branch_coverage=1 00:05:53.532 --rc genhtml_function_coverage=1 00:05:53.532 --rc genhtml_legend=1 00:05:53.532 --rc geninfo_all_blocks=1 00:05:53.532 --rc geninfo_unexecuted_blocks=1 00:05:53.532 00:05:53.532 ' 00:05:53.532 13:54:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.532 13:54:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.532 13:54:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:53.532 13:54:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.532 13:54:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.532 ************************************ 00:05:53.532 START TEST event_perf 00:05:53.532 ************************************ 00:05:53.532 13:54:31 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.532 Running I/O for 1 seconds...[2024-11-17 13:54:31.748680] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:53.532 [2024-11-17 13:54:31.748951] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:05:53.794 [2024-11-17 13:54:31.900011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.794 [2024-11-17 13:54:31.953207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.794 [2024-11-17 13:54:31.953548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.794 [2024-11-17 13:54:31.954306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.794 Running I/O for 1 seconds...[2024-11-17 13:54:31.954365] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.739 00:05:54.739 lcore 0: 132325 00:05:54.739 lcore 1: 132324 00:05:54.739 lcore 2: 132327 00:05:54.739 lcore 3: 132326 00:05:54.739 done. 
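The four lcore counters above are per-reactor event totals for the one-second run. By all appearances -m sets the reactor core mask and -t the duration in seconds (0xF with "Total cores available: 4", -t 1 with "Running I/O for 1 seconds"), so the measurement can be repeated directly; the second line is a hypothetical variant, not something executed in this run:

    # same invocation as the harness: four reactors, one second
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # hypothetical variant: two reactors (cores 0-1), five seconds
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5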
00:05:55.001 00:05:55.001 real 0m1.328s 00:05:55.001 user 0m4.094s 00:05:55.001 sys 0m0.107s 00:05:55.001 13:54:33 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.001 13:54:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 ************************************ 00:05:55.001 END TEST event_perf 00:05:55.001 ************************************ 00:05:55.001 13:54:33 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.001 13:54:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:55.001 13:54:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.001 13:54:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.001 ************************************ 00:05:55.001 START TEST event_reactor 00:05:55.001 ************************************ 00:05:55.001 13:54:33 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.001 [2024-11-17 13:54:33.143536] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:55.001 [2024-11-17 13:54:33.143678] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70778 ] 00:05:55.001 [2024-11-17 13:54:33.295322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.262 [2024-11-17 13:54:33.345792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.207 test_start 00:05:56.207 oneshot 00:05:56.207 tick 100 00:05:56.207 tick 100 00:05:56.207 tick 250 00:05:56.207 tick 100 00:05:56.207 tick 100 00:05:56.207 tick 100 00:05:56.207 tick 250 00:05:56.207 tick 500 00:05:56.207 tick 100 00:05:56.207 tick 100 00:05:56.207 tick 250 00:05:56.207 tick 100 00:05:56.207 tick 100 00:05:56.207 test_end 00:05:56.207 00:05:56.207 real 0m1.319s 00:05:56.207 user 0m1.119s 00:05:56.207 sys 0m0.088s 00:05:56.207 ************************************ 00:05:56.207 END TEST event_reactor 00:05:56.207 ************************************ 00:05:56.207 13:54:34 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.207 13:54:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 13:54:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.207 13:54:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:56.207 13:54:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.207 13:54:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.207 ************************************ 00:05:56.207 START TEST event_reactor_perf 00:05:56.207 ************************************ 00:05:56.207 13:54:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.468 [2024-11-17 13:54:34.533754] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:56.468 [2024-11-17 13:54:34.534064] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70814 ] 00:05:56.468 [2024-11-17 13:54:34.686231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.468 [2024-11-17 13:54:34.736326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.859 test_start 00:05:57.859 test_end 00:05:57.859 Performance: 309103 events per second 00:05:57.859 00:05:57.859 real 0m1.321s 00:05:57.859 user 0m1.117s 00:05:57.859 sys 0m0.093s 00:05:57.859 13:54:35 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.859 ************************************ 00:05:57.859 END TEST event_reactor_perf 00:05:57.859 ************************************ 00:05:57.859 13:54:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:57.859 13:54:35 event -- event/event.sh@49 -- # uname -s 00:05:57.859 13:54:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:57.859 13:54:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.859 13:54:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.859 13:54:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.859 13:54:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.859 ************************************ 00:05:57.859 START TEST event_scheduler 00:05:57.859 ************************************ 00:05:57.859 13:54:35 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:57.859 * Looking for test storage... 
00:05:57.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:57.859 13:54:35 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:57.859 13:54:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.859 13:54:35 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.859 13:54:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.859 --rc genhtml_branch_coverage=1 00:05:57.859 --rc genhtml_function_coverage=1 00:05:57.859 --rc genhtml_legend=1 00:05:57.859 --rc geninfo_all_blocks=1 00:05:57.859 --rc geninfo_unexecuted_blocks=1 00:05:57.859 00:05:57.859 ' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.859 --rc genhtml_branch_coverage=1 00:05:57.859 --rc genhtml_function_coverage=1 00:05:57.859 --rc genhtml_legend=1 00:05:57.859 --rc geninfo_all_blocks=1 00:05:57.859 --rc geninfo_unexecuted_blocks=1 00:05:57.859 00:05:57.859 ' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.859 --rc genhtml_branch_coverage=1 00:05:57.859 --rc genhtml_function_coverage=1 00:05:57.859 --rc genhtml_legend=1 00:05:57.859 --rc geninfo_all_blocks=1 00:05:57.859 --rc geninfo_unexecuted_blocks=1 00:05:57.859 00:05:57.859 ' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.859 --rc genhtml_branch_coverage=1 00:05:57.859 --rc genhtml_function_coverage=1 00:05:57.859 --rc genhtml_legend=1 00:05:57.859 --rc geninfo_all_blocks=1 00:05:57.859 --rc geninfo_unexecuted_blocks=1 00:05:57.859 00:05:57.859 ' 00:05:57.859 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:57.859 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70885 00:05:57.859 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.859 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70885 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70885 ']' 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.859 13:54:36 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.860 13:54:36 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.860 13:54:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.860 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:57.860 [2024-11-17 13:54:36.121708] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.860 [2024-11-17 13:54:36.121863] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70885 ] 00:05:58.121 [2024-11-17 13:54:36.275152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.121 [2024-11-17 13:54:36.328876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.121 [2024-11-17 13:54:36.329507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.121 [2024-11-17 13:54:36.329872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.121 [2024-11-17 13:54:36.329928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:58.692 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.692 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.692 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.692 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.692 POWER: Cannot set governor of lcore 0 to performance 00:05:58.692 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.692 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.692 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:58.692 POWER: Cannot set governor of lcore 0 to userspace 00:05:58.692 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:58.692 POWER: Unable to set Power Management Environment for lcore 0 00:05:58.692 [2024-11-17 13:54:36.976139] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:58.692 [2024-11-17 13:54:36.976167] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:58.692 [2024-11-17 13:54:36.976177] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.692 [2024-11-17 13:54:36.976197] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.692 [2024-11-17 
13:54:36.976205] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.692 [2024-11-17 13:54:36.976216] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.692 13:54:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.692 13:54:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.952 [2024-11-17 13:54:37.060457] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:58.952 13:54:37 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.952 13:54:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.952 13:54:37 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.952 13:54:37 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.952 13:54:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.952 ************************************ 00:05:58.952 START TEST scheduler_create_thread 00:05:58.952 ************************************ 00:05:58.952 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:58.952 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 2 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 3 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 4 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 5 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 6 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 7 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 8 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 9 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 10 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.953 13:54:37 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.953 13:54:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.894 13:54:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.894 13:54:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:59.894 13:54:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.894 13:54:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.275 13:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.275 13:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.275 13:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.275 13:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.275 13:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.232 ************************************ 00:06:02.232 END TEST scheduler_create_thread 00:06:02.232 ************************************ 00:06:02.232 13:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.232 00:06:02.232 real 0m3.371s 00:06:02.232 user 0m0.018s 00:06:02.232 sys 0m0.004s 00:06:02.232 13:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.232 13:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.232 13:54:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.232 13:54:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70885 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70885 ']' 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70885 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70885 00:06:02.232 killing process with pid 70885 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70885' 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70885 00:06:02.232 13:54:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70885 00:06:02.802 [2024-11-17 13:54:40.828485] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:02.802 ************************************ 00:06:02.802 END TEST event_scheduler 00:06:02.802 ************************************ 00:06:02.802 00:06:02.802 real 0m5.125s 00:06:02.802 user 0m10.095s 00:06:02.802 sys 0m0.387s 00:06:02.802 13:54:41 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.802 13:54:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:02.802 13:54:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:02.802 13:54:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:02.802 13:54:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.802 13:54:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.802 13:54:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.802 ************************************ 00:06:02.802 START TEST app_repeat 00:06:02.802 ************************************ 00:06:02.802 13:54:41 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:02.802 Process app_repeat pid: 70991 00:06:02.802 spdk_app_start Round 0 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70991 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70991' 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:02.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.802 13:54:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70991 /var/tmp/spdk-nbd.sock 00:06:02.802 13:54:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70991 ']' 00:06:02.802 13:54:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.802 13:54:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.802 13:54:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.803 13:54:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.803 13:54:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:03.065 [2024-11-17 13:54:41.120667] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
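The coverage prologue traced above (the cmp_versions calls at 13:54:35-36) is what gated the lcov flags: the branch/function-coverage --rc options are only passed when the installed lcov is older than 2. A minimal sketch of that dotted-version comparison, reconstructed from the xtrace (the shipped helper in scripts/common.sh handles more operators and non-numeric components, so treat this as approximate, not the shipped code):

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"    # split on dots, dashes, colons, as traced
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # walk the longer of the two component lists; missing components count as 0
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == ">" ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == "<" ]]; return; fi
        done
        [[ $op == "=" ]]    # all components equal
    }

    # As traced: lcov 1.15 is < 2, so the branch-coverage flags get enabled.
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi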
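The event_scheduler test that just finished configures everything over the app's RPC socket; rpc_cmd in scheduler.sh wraps scripts/rpc.py and captures the returned thread ids (11 and 12 in the trace). Stripped of the xtrace plumbing, and with the four per-core calls condensed into loops, the traced sequence amounts to:

    # four busy threads pinned one per core (masks 0x1-0x8), ~100% active
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    # four idle threads pinned on the same masks
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads: one ~30% active, one idle thread bumped to 50% busy,
    # and one created only to exercise deletion
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"

Note that the dynamic scheduler ran with the load/core/busy limits shown earlier (20/80/95) despite the POWER errors: this VM exposes no cpufreq scaling_governor sysfs nodes, so the DPDK governor fails to initialize and the scheduler proceeds without it.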
00:06:03.065 [2024-11-17 13:54:41.120778] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:06:03.065 [2024-11-17 13:54:41.265685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.065 [2024-11-17 13:54:41.298254] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.065 [2024-11-17 13:54:41.298317] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.006 13:54:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.006 13:54:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.006 13:54:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.006 Malloc0 00:06:04.006 13:54:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.265 Malloc1 00:06:04.265 13:54:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.265 13:54:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.524 /dev/nbd0 00:06:04.524 13:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.524 13:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:04.524 13:54:42 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.524 1+0 records in 00:06:04.524 1+0 records out 00:06:04.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378157 s, 10.8 MB/s 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.524 13:54:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:04.524 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.524 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.524 13:54:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.524 /dev/nbd1 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.781 1+0 records in 00:06:04.781 1+0 records out 00:06:04.781 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208994 s, 19.6 MB/s 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:04.781 13:54:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
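Each nbd_start_disk above is immediately followed by the waitfornbd helper from autotest_common.sh: poll /proc/partitions until the device name shows up, then prove the device actually serves data with one direct-I/O read. Reconstructed from the traced lines (the 20-attempt bound and the nbdtest scratch path are as traced; the sleep between attempts is an assumption, since every attempt in this run succeeded on the first try):

    waitfornbd() {
        local nbd_name=$1 i size
        local test_file=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        # wait for the kernel to register the device
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            [ "$i" -eq 20 ] && return 1
            sleep 0.1    # assumed back-off between polls
        done
        # a single 4 KiB direct read must come back non-empty
        for (( i = 1; i <= 20; i++ )); do
            dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$test_file")
            rm -f "$test_file"
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }

Teardown uses the mirror image, waitfornbd_exit (visible further down): the same bounded grep loop, breaking once the name has disappeared from /proc/partitions.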
00:06:04.781 13:54:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.781 13:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:04.781 { 00:06:04.781 "nbd_device": "/dev/nbd0", 00:06:04.781 "bdev_name": "Malloc0" 00:06:04.781 }, 00:06:04.781 { 00:06:04.781 "nbd_device": "/dev/nbd1", 00:06:04.781 "bdev_name": "Malloc1" 00:06:04.781 } 00:06:04.782 ]' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:04.782 { 00:06:04.782 "nbd_device": "/dev/nbd0", 00:06:04.782 "bdev_name": "Malloc0" 00:06:04.782 }, 00:06:04.782 { 00:06:04.782 "nbd_device": "/dev/nbd1", 00:06:04.782 "bdev_name": "Malloc1" 00:06:04.782 } 00:06:04.782 ]' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:04.782 /dev/nbd1' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:04.782 /dev/nbd1' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:04.782 256+0 records in 00:06:04.782 256+0 records out 00:06:04.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00704628 s, 149 MB/s 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:04.782 256+0 records in 00:06:04.782 256+0 records out 00:06:04.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013715 s, 76.5 MB/s 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:04.782 13:54:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.043 256+0 records in 00:06:05.043 256+0 records out 00:06:05.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174584 s, 60.1 MB/s 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.043 13:54:43 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.043 13:54:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.305 13:54:43 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.305 13:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.566 13:54:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.566 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.566 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.566 13:54:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:05.566 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:05.567 13:54:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:05.567 13:54:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:05.828 13:54:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.828 [2024-11-17 13:54:44.062730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.828 [2024-11-17 13:54:44.089259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.828 [2024-11-17 13:54:44.089275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.828 [2024-11-17 13:54:44.117860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.828 [2024-11-17 13:54:44.117912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.129 spdk_app_start Round 1 00:06:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.130 13:54:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.130 13:54:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:09.130 13:54:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70991 /var/tmp/spdk-nbd.sock 00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70991 ']' 00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
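Round 0's teardown was proven by that count=0: once both devices are stopped, nbd_get_disks returns an empty JSON array. The counting helper traced above reduces to:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        # extract the device paths; '[]' yields an empty string
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero on zero matches, hence the 'true' in the trace
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

In this log it reports 2 while Malloc0/Malloc1 are exported and 0 after the two nbd_stop_disk calls, so the '[ 0 -ne 0 ]' guard falls through and the round ends cleanly.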
00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.130 13:54:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.130 13:54:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.130 13:54:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:09.130 13:54:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.130 Malloc0 00:06:09.130 13:54:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.390 Malloc1 00:06:09.390 13:54:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.390 /dev/nbd0 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.390 1+0 records in 00:06:09.390 1+0 records out 
00:06:09.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217124 s, 18.9 MB/s 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.390 13:54:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.390 13:54:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.651 /dev/nbd1 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.651 1+0 records in 00:06:09.651 1+0 records out 00:06:09.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183776 s, 22.3 MB/s 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:09.651 13:54:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.651 13:54:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.913 { 00:06:09.913 "nbd_device": "/dev/nbd0", 00:06:09.913 "bdev_name": "Malloc0" 00:06:09.913 }, 00:06:09.913 { 00:06:09.913 "nbd_device": "/dev/nbd1", 00:06:09.913 "bdev_name": "Malloc1" 00:06:09.913 } 
00:06:09.913 ]' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.913 { 00:06:09.913 "nbd_device": "/dev/nbd0", 00:06:09.913 "bdev_name": "Malloc0" 00:06:09.913 }, 00:06:09.913 { 00:06:09.913 "nbd_device": "/dev/nbd1", 00:06:09.913 "bdev_name": "Malloc1" 00:06:09.913 } 00:06:09.913 ]' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.913 /dev/nbd1' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.913 /dev/nbd1' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.913 256+0 records in 00:06:09.913 256+0 records out 00:06:09.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414228 s, 253 MB/s 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.913 256+0 records in 00:06:09.913 256+0 records out 00:06:09.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123951 s, 84.6 MB/s 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.913 256+0 records in 00:06:09.913 256+0 records out 00:06:09.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180037 s, 58.2 MB/s 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.913 13:54:48 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.913 13:54:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.180 13:54:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.463 13:54:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.723 13:54:48 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.723 13:54:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.723 13:54:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.983 13:54:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:10.983 [2024-11-17 13:54:49.170140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.983 [2024-11-17 13:54:49.196788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.983 [2024-11-17 13:54:49.196879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.983 [2024-11-17 13:54:49.225337] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.983 [2024-11-17 13:54:49.225381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.274 spdk_app_start Round 2 00:06:14.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.274 13:54:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.274 13:54:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.274 13:54:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70991 /var/tmp/spdk-nbd.sock 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70991 ']' 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
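Round 2 below replays the same sequence as rounds 0 and 1. The loop event.sh is running across this whole section (its @23-@35 line references in the trace) is, in outline, as follows; the app_repeat binary, launched with -t 4, keeps pid 70991 across rounds, which suggests it restarts its SPDK app framework after each SIGTERM rather than exiting, and waitforlisten synchronizes on that:

    rpc_server=/var/tmp/spdk-nbd.sock
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # two 64 MB malloc bdevs with 4 KiB blocks, recreated every round
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc0
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" bdev_malloc_create 64 4096   # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done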
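The dd/cmp pairs traced in round 1 above, and about to repeat here, are app_repeat's actual payload: nbd_dd_data_verify pushes the same 1 MiB of random data through every exported device with direct I/O, then compares each device against the source byte-for-byte. Condensed from the traced commands:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: one random 1 MiB buffer, pushed through every nbd device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done

    # verify: each device must read back identical to the source file
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"    # first differing byte makes cmp exit non-zero and fails the test
    done
    rm "$tmp_file"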
00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.274 13:54:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:14.274 13:54:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.274 Malloc0 00:06:14.274 13:54:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.535 Malloc1 00:06:14.535 13:54:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.535 13:54:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.796 /dev/nbd0 00:06:14.796 13:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.796 13:54:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.796 1+0 records in 00:06:14.796 1+0 records out 
00:06:14.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480676 s, 8.5 MB/s 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:14.796 13:54:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:14.796 13:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.796 13:54:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.796 13:54:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.057 /dev/nbd1 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.057 1+0 records in 00:06:15.057 1+0 records out 00:06:15.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284623 s, 14.4 MB/s 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:15.057 13:54:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.057 13:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.318 { 00:06:15.318 "nbd_device": "/dev/nbd0", 00:06:15.318 "bdev_name": "Malloc0" 00:06:15.318 }, 00:06:15.318 { 00:06:15.318 "nbd_device": "/dev/nbd1", 00:06:15.318 "bdev_name": "Malloc1" 00:06:15.318 } 
00:06:15.318 ]' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.318 { 00:06:15.318 "nbd_device": "/dev/nbd0", 00:06:15.318 "bdev_name": "Malloc0" 00:06:15.318 }, 00:06:15.318 { 00:06:15.318 "nbd_device": "/dev/nbd1", 00:06:15.318 "bdev_name": "Malloc1" 00:06:15.318 } 00:06:15.318 ]' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.318 /dev/nbd1' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.318 /dev/nbd1' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.318 256+0 records in 00:06:15.318 256+0 records out 00:06:15.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773461 s, 136 MB/s 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.318 256+0 records in 00:06:15.318 256+0 records out 00:06:15.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204004 s, 51.4 MB/s 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.318 256+0 records in 00:06:15.318 256+0 records out 00:06:15.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190431 s, 55.1 MB/s 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.318 13:54:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.318 13:54:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.580 13:54:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.842 13:54:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.842 13:54:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.103 13:54:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.103 13:54:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.103 13:54:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.103 13:54:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.103 13:54:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.364 [2024-11-17 13:54:54.429672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.364 [2024-11-17 13:54:54.456110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.364 [2024-11-17 13:54:54.456210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.364 [2024-11-17 13:54:54.484654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.364 [2024-11-17 13:54:54.484700] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.693 13:54:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70991 /var/tmp/spdk-nbd.sock 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70991 ']' 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:19.693 13:54:57 event.app_repeat -- event/event.sh@39 -- # killprocess 70991 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70991 ']' 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70991 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70991 00:06:19.693 killing process with pid 70991 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70991' 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70991 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70991 00:06:19.693 spdk_app_start is called in Round 0. 00:06:19.693 Shutdown signal received, stop current app iteration 00:06:19.693 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.693 spdk_app_start is called in Round 1. 00:06:19.693 Shutdown signal received, stop current app iteration 00:06:19.693 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.693 spdk_app_start is called in Round 2. 00:06:19.693 Shutdown signal received, stop current app iteration 00:06:19.693 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:19.693 spdk_app_start is called in Round 3. 00:06:19.693 Shutdown signal received, stop current app iteration 00:06:19.693 ************************************ 00:06:19.693 END TEST app_repeat 00:06:19.693 ************************************ 00:06:19.693 13:54:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:19.693 13:54:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:19.693 00:06:19.693 real 0m16.623s 00:06:19.693 user 0m36.999s 00:06:19.693 sys 0m2.009s 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.693 13:54:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:19.693 13:54:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:19.693 13:54:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.693 13:54:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.693 13:54:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.693 13:54:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.693 ************************************ 00:06:19.693 START TEST cpu_locks 00:06:19.693 ************************************ 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:19.693 * Looking for test storage... 
00:06:19.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.693 13:54:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.693 --rc genhtml_branch_coverage=1 00:06:19.693 --rc genhtml_function_coverage=1 00:06:19.693 --rc genhtml_legend=1 00:06:19.693 --rc geninfo_all_blocks=1 00:06:19.693 --rc geninfo_unexecuted_blocks=1 00:06:19.693 00:06:19.693 ' 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.693 --rc genhtml_branch_coverage=1 00:06:19.693 --rc genhtml_function_coverage=1 
00:06:19.693 --rc genhtml_legend=1 00:06:19.693 --rc geninfo_all_blocks=1 00:06:19.693 --rc geninfo_unexecuted_blocks=1 00:06:19.693 00:06:19.693 ' 00:06:19.693 13:54:57 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.693 --rc genhtml_branch_coverage=1 00:06:19.693 --rc genhtml_function_coverage=1 00:06:19.693 --rc genhtml_legend=1 00:06:19.693 --rc geninfo_all_blocks=1 00:06:19.693 --rc geninfo_unexecuted_blocks=1 00:06:19.693 00:06:19.694 ' 00:06:19.694 13:54:57 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.694 --rc genhtml_branch_coverage=1 00:06:19.694 --rc genhtml_function_coverage=1 00:06:19.694 --rc genhtml_legend=1 00:06:19.694 --rc geninfo_all_blocks=1 00:06:19.694 --rc geninfo_unexecuted_blocks=1 00:06:19.694 00:06:19.694 ' 00:06:19.694 13:54:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:19.694 13:54:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:19.694 13:54:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:19.694 13:54:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:19.694 13:54:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.694 13:54:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.694 13:54:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.694 ************************************ 00:06:19.694 START TEST default_locks 00:06:19.694 ************************************ 00:06:19.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71405 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71405 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71405 ']' 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.694 13:54:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.954 [2024-11-17 13:54:58.000763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:19.954 [2024-11-17 13:54:58.001113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:06:19.954 [2024-11-17 13:54:58.152096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.954 [2024-11-17 13:54:58.191247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.898 13:54:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.898 13:54:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:20.898 13:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71405 00:06:20.898 13:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71405 00:06:20.898 13:54:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71405 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71405 ']' 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71405 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71405 00:06:20.898 killing process with pid 71405 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71405' 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71405 00:06:20.898 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71405 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71405 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71405 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71405 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71405 ']' 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.159 ERROR: process (pid: 71405) is no longer running 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.159 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71405) - No such process 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.159 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.160 00:06:21.160 real 0m1.368s 00:06:21.160 user 0m1.416s 00:06:21.160 sys 0m0.402s 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.160 13:54:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 ************************************ 00:06:21.160 END TEST default_locks 00:06:21.160 ************************************ 00:06:21.160 13:54:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.160 13:54:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.160 13:54:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.160 13:54:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 ************************************ 00:06:21.160 START TEST default_locks_via_rpc 00:06:21.160 ************************************ 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71458 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71458 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71458 ']' 
00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.160 13:54:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 [2024-11-17 13:54:59.415773] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.160 [2024-11-17 13:54:59.415914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71458 ] 00:06:21.421 [2024-11-17 13:54:59.567595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.421 [2024-11-17 13:54:59.616995] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71458 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71458 00:06:21.993 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71458 00:06:22.254 13:55:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71458 ']' 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71458 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71458 00:06:22.254 killing process with pid 71458 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71458' 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71458 00:06:22.254 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71458 00:06:22.515 00:06:22.515 real 0m1.464s 00:06:22.515 user 0m1.441s 00:06:22.515 sys 0m0.477s 00:06:22.515 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.515 13:55:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.515 ************************************ 00:06:22.515 END TEST default_locks_via_rpc 00:06:22.515 ************************************ 00:06:22.777 13:55:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:22.777 13:55:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.777 13:55:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.777 13:55:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.777 ************************************ 00:06:22.777 START TEST non_locking_app_on_locked_coremask 00:06:22.777 ************************************ 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71499 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71499 /var/tmp/spdk.sock 00:06:22.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71499 ']' 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.777 13:55:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.777 [2024-11-17 13:55:00.949463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:22.777 [2024-11-17 13:55:00.949615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71499 ] 00:06:23.038 [2024-11-17 13:55:01.100008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.038 [2024-11-17 13:55:01.149292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71515 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71515 /var/tmp/spdk2.sock 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71515 ']' 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.611 13:55:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.611 [2024-11-17 13:55:01.865148] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:23.611 [2024-11-17 13:55:01.865553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71515 ] 00:06:23.871 [2024-11-17 13:55:02.023346] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.871 [2024-11-17 13:55:02.023422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.871 [2024-11-17 13:55:02.128869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.442 13:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.442 13:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:24.442 13:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71499 00:06:24.442 13:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71499 00:06:24.442 13:55:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71499 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71499 ']' 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71499 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71499 00:06:25.011 killing process with pid 71499 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71499' 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71499 00:06:25.011 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71499 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71515 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71515 ']' 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71515 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71515 00:06:25.581 killing process with pid 71515 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71515' 00:06:25.581 13:55:03 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71515 00:06:25.581 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71515 00:06:25.581 ************************************ 00:06:25.581 END TEST non_locking_app_on_locked_coremask 00:06:25.581 00:06:25.582 real 0m2.974s 00:06:25.582 user 0m3.209s 00:06:25.582 sys 0m0.926s 00:06:25.582 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.582 13:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.582 ************************************ 00:06:25.842 13:55:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.842 13:55:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.842 13:55:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.842 13:55:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.842 ************************************ 00:06:25.842 START TEST locking_app_on_unlocked_coremask 00:06:25.842 ************************************ 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:25.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71573 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71573 /var/tmp/spdk.sock 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71573 ']' 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.842 13:55:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.842 [2024-11-17 13:55:03.982716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:25.842 [2024-11-17 13:55:03.982853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71573 ] 00:06:25.842 [2024-11-17 13:55:04.127939] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:25.842 [2024-11-17 13:55:04.127984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.105 [2024-11-17 13:55:04.157191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71589 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71589 /var/tmp/spdk2.sock 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71589 ']' 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.675 13:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.675 [2024-11-17 13:55:04.873001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:26.675 [2024-11-17 13:55:04.873284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71589 ] 00:06:26.937 [2024-11-17 13:55:05.020062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.937 [2024-11-17 13:55:05.077330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.510 13:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.510 13:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.510 13:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71589 00:06:27.510 13:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71589 00:06:27.510 13:55:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71573 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71573 ']' 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71573 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71573 00:06:27.770 killing process with pid 71573 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71573' 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71573 00:06:27.770 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71573 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71589 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71589 ']' 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71589 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71589 00:06:28.345 killing process with pid 71589 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.345 13:55:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71589' 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71589 00:06:28.345 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71589 00:06:28.607 ************************************ 00:06:28.607 END TEST locking_app_on_unlocked_coremask 00:06:28.607 ************************************ 00:06:28.607 00:06:28.607 real 0m2.886s 00:06:28.607 user 0m3.200s 00:06:28.607 sys 0m0.794s 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.607 13:55:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.607 13:55:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:28.607 13:55:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.607 13:55:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.607 ************************************ 00:06:28.607 START TEST locking_app_on_locked_coremask 00:06:28.607 ************************************ 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71647 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71647 /var/tmp/spdk.sock 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71647 ']' 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.607 13:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.870 [2024-11-17 13:55:06.925939] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:28.870 [2024-11-17 13:55:06.926102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71647 ] 00:06:28.870 [2024-11-17 13:55:07.079504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.870 [2024-11-17 13:55:07.115166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71663 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71663 /var/tmp/spdk2.sock 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71663 /var/tmp/spdk2.sock 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:29.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71663 /var/tmp/spdk2.sock 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71663 ']' 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.814 13:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.814 [2024-11-17 13:55:07.831653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:29.814 [2024-11-17 13:55:07.831799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71663 ] 00:06:29.814 [2024-11-17 13:55:07.989949] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71647 has claimed it. 00:06:29.814 [2024-11-17 13:55:07.990035] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.387 ERROR: process (pid: 71663) is no longer running 00:06:30.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71663) - No such process 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71647 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71647 00:06:30.387 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71647 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71647 ']' 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71647 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71647 00:06:30.649 killing process with pid 71647 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71647' 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71647 00:06:30.649 13:55:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71647 00:06:30.911 ************************************ 00:06:30.911 END TEST locking_app_on_locked_coremask 00:06:30.911 ************************************ 00:06:30.911 00:06:30.911 real 0m2.332s 00:06:30.911 user 0m2.601s 00:06:30.911 sys 0m0.594s 00:06:30.911 13:55:09 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.911 13:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 13:55:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.172 13:55:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.172 13:55:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.172 13:55:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 ************************************ 00:06:31.172 START TEST locking_overlapped_coremask 00:06:31.172 ************************************ 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71705 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71705 /var/tmp/spdk.sock 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71705 ']' 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.172 13:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.172 [2024-11-17 13:55:09.319047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:31.172 [2024-11-17 13:55:09.319437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71705 ] 00:06:31.172 [2024-11-17 13:55:09.464346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.434 [2024-11-17 13:55:09.517309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.434 [2024-11-17 13:55:09.517390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.434 [2024-11-17 13:55:09.517449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71723 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71723 /var/tmp/spdk2.sock 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71723 /var/tmp/spdk2.sock 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71723 /var/tmp/spdk2.sock 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71723 ']' 00:06:32.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.008 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.008 [2024-11-17 13:55:10.261411] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:32.008 [2024-11-17 13:55:10.261811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71723 ] 00:06:32.268 [2024-11-17 13:55:10.418903] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71705 has claimed it. 00:06:32.268 [2024-11-17 13:55:10.418988] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.840 ERROR: process (pid: 71723) is no longer running 00:06:32.840 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71723) - No such process 00:06:32.840 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.840 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:32.840 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:32.840 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71705 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71705 ']' 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71705 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71705 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.841 killing process with pid 71705 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71705' 00:06:32.841 13:55:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71705 00:06:32.841 13:55:10 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71705 00:06:33.102 00:06:33.102 real 0m2.034s 00:06:33.102 user 0m5.427s 00:06:33.102 sys 0m0.545s 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.102 ************************************ 00:06:33.102 END TEST locking_overlapped_coremask 00:06:33.102 ************************************ 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 13:55:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:33.102 13:55:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.102 13:55:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.102 13:55:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.102 ************************************ 00:06:33.102 START TEST locking_overlapped_coremask_via_rpc 00:06:33.102 ************************************ 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:33.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71765 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71765 /var/tmp/spdk.sock 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71765 ']' 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.102 13:55:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.364 [2024-11-17 13:55:11.426884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:33.364 [2024-11-17 13:55:11.427033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71765 ] 00:06:33.364 [2024-11-17 13:55:11.577881] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
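With cpumask locks active, each claimed core is backed by a lock file, which is what locks_exist and check_remaining_locks verify elsewhere in this run; --disable-cpumask-locks, as used here, skips that claim at startup. A quick inspection sketch (lock-file naming taken from check_remaining_locks above; tgt_pid is a placeholder for the claiming target's pid):

  ls /var/tmp/spdk_cpu_lock_*                    # expect _000.._002 while a -m 0x7 target holds cores 0-2
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock     # the claiming process holds the advisory locks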
00:06:33.364 [2024-11-17 13:55:11.578104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.364 [2024-11-17 13:55:11.630566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.364 [2024-11-17 13:55:11.630887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.364 [2024-11-17 13:55:11.630926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.305 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.305 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71783 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71783 /var/tmp/spdk2.sock 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71783 ']' 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.306 13:55:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.306 [2024-11-17 13:55:12.357763] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:34.306 [2024-11-17 13:55:12.358187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71783 ] 00:06:34.306 [2024-11-17 13:55:12.515702] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
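The two masks overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so once the first target claims its cores, core 2 is the one the second target cannot take. Plain shell arithmetic confirms the overlap:

  printf 'shared cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2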
00:06:34.306 [2024-11-17 13:55:12.515772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.567 [2024-11-17 13:55:12.627152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.567 [2024-11-17 13:55:12.627286] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.567 [2024-11-17 13:55:12.627339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.138 [2024-11-17 13:55:13.232417] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71765 has claimed it. 
00:06:35.138 request: 00:06:35.138 { 00:06:35.138 "method": "framework_enable_cpumask_locks", 00:06:35.138 "req_id": 1 00:06:35.138 } 00:06:35.138 Got JSON-RPC error response 00:06:35.138 response: 00:06:35.138 { 00:06:35.138 "code": -32603, 00:06:35.138 "message": "Failed to claim CPU core: 2" 00:06:35.138 } 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71765 /var/tmp/spdk.sock 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71765 ']' 00:06:35.138 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.139 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.139 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.139 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.139 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71783 /var/tmp/spdk2.sock 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71783 ']' 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
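Because both targets were started with --disable-cpumask-locks, the conflict only surfaces when locks are enabled over RPC: the first framework_enable_cpumask_locks call claims cores 0-2, and the same call against the second target's socket then fails on the shared core with the -32603 error shown above. A by-hand repro sketch (the method name and the -s flag both appear in this log; the standalone sequence itself is assumed):

  scripts/rpc.py framework_enable_cpumask_locks                        # first target claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"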
00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.399 00:06:35.399 real 0m2.316s 00:06:35.399 user 0m1.099s 00:06:35.399 sys 0m0.141s 00:06:35.399 ************************************ 00:06:35.399 END TEST locking_overlapped_coremask_via_rpc 00:06:35.399 ************************************ 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.399 13:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.399 13:55:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:35.399 13:55:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71765 ]] 00:06:35.399 13:55:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71765 00:06:35.399 13:55:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71765 ']' 00:06:35.399 13:55:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71765 00:06:35.399 13:55:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:35.399 13:55:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.399 13:55:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71765 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71765' 00:06:35.660 killing process with pid 71765 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71765 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71765 00:06:35.660 13:55:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71783 ]] 00:06:35.660 13:55:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71783 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71783 ']' 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71783 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:35.660 13:55:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.660 
13:55:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71783 00:06:35.920 killing process with pid 71783 00:06:35.920 13:55:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:35.920 13:55:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:35.920 13:55:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71783' 00:06:35.920 13:55:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71783 00:06:35.920 13:55:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71783 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.920 Process with pid 71765 is not found 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71765 ]] 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71765 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71765 ']' 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71765 00:06:35.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71765) - No such process 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71765 is not found' 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71783 ]] 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71783 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71783 ']' 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71783 00:06:35.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71783) - No such process 00:06:35.920 Process with pid 71783 is not found 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71783 is not found' 00:06:35.920 13:55:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.920 00:06:35.920 real 0m16.452s 00:06:35.920 user 0m28.272s 00:06:35.920 sys 0m4.832s 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.920 ************************************ 00:06:35.920 END TEST cpu_locks 00:06:35.920 ************************************ 00:06:35.920 13:55:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.181 00:06:36.181 real 0m42.692s 00:06:36.181 user 1m21.878s 00:06:36.181 sys 0m7.753s 00:06:36.181 13:55:14 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.181 13:55:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.181 ************************************ 00:06:36.181 END TEST event 00:06:36.181 ************************************ 00:06:36.181 13:55:14 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.181 13:55:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.181 13:55:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.181 13:55:14 -- common/autotest_common.sh@10 -- # set +x 00:06:36.181 ************************************ 00:06:36.181 START TEST thread 00:06:36.181 ************************************ 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:36.181 * Looking for test storage... 
00:06:36.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.181 13:55:14 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.181 13:55:14 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.181 13:55:14 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.181 13:55:14 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.181 13:55:14 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.181 13:55:14 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.181 13:55:14 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.181 13:55:14 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.181 13:55:14 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.181 13:55:14 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.181 13:55:14 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.181 13:55:14 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:36.181 13:55:14 thread -- scripts/common.sh@345 -- # : 1 00:06:36.181 13:55:14 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.181 13:55:14 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.181 13:55:14 thread -- scripts/common.sh@365 -- # decimal 1 00:06:36.181 13:55:14 thread -- scripts/common.sh@353 -- # local d=1 00:06:36.181 13:55:14 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.181 13:55:14 thread -- scripts/common.sh@355 -- # echo 1 00:06:36.181 13:55:14 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.181 13:55:14 thread -- scripts/common.sh@366 -- # decimal 2 00:06:36.181 13:55:14 thread -- scripts/common.sh@353 -- # local d=2 00:06:36.181 13:55:14 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.181 13:55:14 thread -- scripts/common.sh@355 -- # echo 2 00:06:36.181 13:55:14 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.181 13:55:14 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.181 13:55:14 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.181 13:55:14 thread -- scripts/common.sh@368 -- # return 0 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.181 13:55:14 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.182 --rc genhtml_branch_coverage=1 00:06:36.182 --rc genhtml_function_coverage=1 00:06:36.182 --rc genhtml_legend=1 00:06:36.182 --rc geninfo_all_blocks=1 00:06:36.182 --rc geninfo_unexecuted_blocks=1 00:06:36.182 00:06:36.182 ' 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.182 --rc genhtml_branch_coverage=1 00:06:36.182 --rc genhtml_function_coverage=1 00:06:36.182 --rc genhtml_legend=1 00:06:36.182 --rc geninfo_all_blocks=1 00:06:36.182 --rc geninfo_unexecuted_blocks=1 00:06:36.182 00:06:36.182 ' 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:36.182 --rc genhtml_branch_coverage=1 00:06:36.182 --rc genhtml_function_coverage=1 00:06:36.182 --rc genhtml_legend=1 00:06:36.182 --rc geninfo_all_blocks=1 00:06:36.182 --rc geninfo_unexecuted_blocks=1 00:06:36.182 00:06:36.182 ' 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.182 --rc genhtml_branch_coverage=1 00:06:36.182 --rc genhtml_function_coverage=1 00:06:36.182 --rc genhtml_legend=1 00:06:36.182 --rc geninfo_all_blocks=1 00:06:36.182 --rc geninfo_unexecuted_blocks=1 00:06:36.182 00:06:36.182 ' 00:06:36.182 13:55:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.182 13:55:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.182 ************************************ 00:06:36.182 START TEST thread_poller_perf 00:06:36.182 ************************************ 00:06:36.182 13:55:14 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:36.443 [2024-11-17 13:55:14.490671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:36.443 [2024-11-17 13:55:14.490938] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71910 ] 00:06:36.443 [2024-11-17 13:55:14.637781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.443 [2024-11-17 13:55:14.688343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.443 Running 1000 pollers for 1 seconds with 1 microseconds period. 
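The results block below reports poller_cost as total busy cycles divided by total_run_count, with the nanosecond figure derived from tsc_hz; checking the first run's numbers by hand (formula assumed from the printed fields):

  busy=2610029076 runs=306000 tsc_hz=2600000000
  echo $(( busy / runs ))                          # 8529 cyc per poller invocation
  echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3280 nsec

The second run, further below, checks out the same way: 2603265096 / 3964000 gives 656 cyc, about 252 nsec.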
00:06:37.838 [2024-11-17T13:55:16.139Z] ====================================== 00:06:37.838 [2024-11-17T13:55:16.139Z] busy:2610029076 (cyc) 00:06:37.838 [2024-11-17T13:55:16.139Z] total_run_count: 306000 00:06:37.838 [2024-11-17T13:55:16.139Z] tsc_hz: 2600000000 (cyc) 00:06:37.838 [2024-11-17T13:55:16.139Z] ====================================== 00:06:37.838 [2024-11-17T13:55:16.139Z] poller_cost: 8529 (cyc), 3280 (nsec) 00:06:37.838 00:06:37.838 real 0m1.319s 00:06:37.838 user 0m1.131s 00:06:37.838 sys 0m0.079s 00:06:37.838 13:55:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.838 ************************************ 00:06:37.838 END TEST thread_poller_perf 00:06:37.838 ************************************ 00:06:37.838 13:55:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.838 13:55:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.838 13:55:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:37.838 13:55:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.838 13:55:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.838 ************************************ 00:06:37.838 START TEST thread_poller_perf 00:06:37.838 ************************************ 00:06:37.838 13:55:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:37.838 [2024-11-17 13:55:15.872030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:37.838 [2024-11-17 13:55:15.872441] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71952 ] 00:06:37.838 [2024-11-17 13:55:16.021023] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.838 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:37.838 [2024-11-17 13:55:16.086656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.239 [2024-11-17T13:55:17.540Z] ====================================== 00:06:39.239 [2024-11-17T13:55:17.540Z] busy:2603265096 (cyc) 00:06:39.239 [2024-11-17T13:55:17.540Z] total_run_count: 3964000 00:06:39.239 [2024-11-17T13:55:17.540Z] tsc_hz: 2600000000 (cyc) 00:06:39.239 [2024-11-17T13:55:17.540Z] ====================================== 00:06:39.239 [2024-11-17T13:55:17.540Z] poller_cost: 656 (cyc), 252 (nsec) 00:06:39.239 ************************************ 00:06:39.239 END TEST thread_poller_perf 00:06:39.239 ************************************ 00:06:39.239 00:06:39.239 real 0m1.297s 00:06:39.239 user 0m1.111s 00:06:39.239 sys 0m0.079s 00:06:39.239 13:55:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.239 13:55:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.239 13:55:17 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:39.239 ************************************ 00:06:39.239 END TEST thread 00:06:39.239 ************************************ 00:06:39.239 00:06:39.239 real 0m2.871s 00:06:39.239 user 0m2.361s 00:06:39.239 sys 0m0.271s 00:06:39.239 13:55:17 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.239 13:55:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.239 13:55:17 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:39.239 13:55:17 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.239 13:55:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.239 13:55:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.239 13:55:17 -- common/autotest_common.sh@10 -- # set +x 00:06:39.239 ************************************ 00:06:39.239 START TEST app_cmdline 00:06:39.239 ************************************ 00:06:39.239 13:55:17 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:39.239 * Looking for test storage... 
00:06:39.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:39.239 13:55:17 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.239 13:55:17 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.239 13:55:17 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.239 13:55:17 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.239 13:55:17 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:39.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.240 13:55:17 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.240 --rc genhtml_branch_coverage=1 00:06:39.240 --rc genhtml_function_coverage=1 00:06:39.240 --rc genhtml_legend=1 00:06:39.240 --rc geninfo_all_blocks=1 00:06:39.240 --rc geninfo_unexecuted_blocks=1 00:06:39.240 00:06:39.240 ' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.240 --rc genhtml_branch_coverage=1 00:06:39.240 --rc genhtml_function_coverage=1 00:06:39.240 --rc genhtml_legend=1 00:06:39.240 --rc geninfo_all_blocks=1 00:06:39.240 --rc geninfo_unexecuted_blocks=1 00:06:39.240 00:06:39.240 ' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.240 --rc genhtml_branch_coverage=1 00:06:39.240 --rc genhtml_function_coverage=1 00:06:39.240 --rc genhtml_legend=1 00:06:39.240 --rc geninfo_all_blocks=1 00:06:39.240 --rc geninfo_unexecuted_blocks=1 00:06:39.240 00:06:39.240 ' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.240 --rc genhtml_branch_coverage=1 00:06:39.240 --rc genhtml_function_coverage=1 00:06:39.240 --rc genhtml_legend=1 00:06:39.240 --rc geninfo_all_blocks=1 00:06:39.240 --rc geninfo_unexecuted_blocks=1 00:06:39.240 00:06:39.240 ' 00:06:39.240 13:55:17 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:39.240 13:55:17 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72030 00:06:39.240 13:55:17 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72030 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72030 ']' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.240 13:55:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:39.240 13:55:17 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:39.240 [2024-11-17 13:55:17.418527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
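The target being launched here is restricted with --rpcs-allowed spdk_get_version,rpc_get_methods, so the two methods the test needs are whitelisted while any other method is rejected before dispatch; that is what the env_dpdk_get_mem_stats probe later in this test trips over. Sketch of the two outcomes (invocations assumed; both method names are taken from this log):

  scripts/rpc.py spdk_get_version         # allowed -> version JSON
  scripts/rpc.py env_dpdk_get_mem_stats   # filtered -> -32601 "Method not found"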
00:06:39.240 [2024-11-17 13:55:17.418776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72030 ] 00:06:39.500 [2024-11-17 13:55:17.566455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.500 [2024-11-17 13:55:17.598368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.067 13:55:18 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.067 13:55:18 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:40.067 13:55:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:40.325 { 00:06:40.325 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:40.325 "fields": { 00:06:40.325 "major": 24, 00:06:40.325 "minor": 9, 00:06:40.325 "patch": 1, 00:06:40.325 "suffix": "-pre", 00:06:40.325 "commit": "b18e1bd62" 00:06:40.325 } 00:06:40.325 } 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:40.325 13:55:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:40.325 13:55:18 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:40.583 request: 00:06:40.583 { 00:06:40.583 "method": "env_dpdk_get_mem_stats", 00:06:40.583 "req_id": 1 00:06:40.583 } 00:06:40.583 Got JSON-RPC error response 00:06:40.583 response: 00:06:40.583 { 00:06:40.583 "code": -32601, 00:06:40.583 "message": "Method not found" 00:06:40.583 } 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.583 13:55:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72030 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72030 ']' 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72030 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72030 00:06:40.583 killing process with pid 72030 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72030' 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@969 -- # kill 72030 00:06:40.583 13:55:18 app_cmdline -- common/autotest_common.sh@974 -- # wait 72030 00:06:40.842 00:06:40.842 real 0m1.784s 00:06:40.842 user 0m2.129s 00:06:40.842 sys 0m0.413s 00:06:40.842 13:55:18 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.842 13:55:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:40.842 ************************************ 00:06:40.842 END TEST app_cmdline 00:06:40.842 ************************************ 00:06:40.842 13:55:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.842 13:55:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.842 13:55:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.842 13:55:19 -- common/autotest_common.sh@10 -- # set +x 00:06:40.842 ************************************ 00:06:40.842 START TEST version 00:06:40.842 ************************************ 00:06:40.842 13:55:19 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:40.842 * Looking for test storage... 
00:06:40.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:40.842 13:55:19 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:40.842 13:55:19 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.842 13:55:19 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.100 13:55:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.100 13:55:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.100 13:55:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.100 13:55:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.100 13:55:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.100 13:55:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.100 13:55:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.100 13:55:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.100 13:55:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.100 13:55:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.100 13:55:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.100 13:55:19 version -- scripts/common.sh@344 -- # case "$op" in 00:06:41.100 13:55:19 version -- scripts/common.sh@345 -- # : 1 00:06:41.100 13:55:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.100 13:55:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.100 13:55:19 version -- scripts/common.sh@365 -- # decimal 1 00:06:41.100 13:55:19 version -- scripts/common.sh@353 -- # local d=1 00:06:41.100 13:55:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.100 13:55:19 version -- scripts/common.sh@355 -- # echo 1 00:06:41.100 13:55:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.100 13:55:19 version -- scripts/common.sh@366 -- # decimal 2 00:06:41.100 13:55:19 version -- scripts/common.sh@353 -- # local d=2 00:06:41.100 13:55:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.100 13:55:19 version -- scripts/common.sh@355 -- # echo 2 00:06:41.100 13:55:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.100 13:55:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.100 13:55:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.100 13:55:19 version -- scripts/common.sh@368 -- # return 0 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.100 --rc genhtml_branch_coverage=1 00:06:41.100 --rc genhtml_function_coverage=1 00:06:41.100 --rc genhtml_legend=1 00:06:41.100 --rc geninfo_all_blocks=1 00:06:41.100 --rc geninfo_unexecuted_blocks=1 00:06:41.100 00:06:41.100 ' 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.100 --rc genhtml_branch_coverage=1 00:06:41.100 --rc genhtml_function_coverage=1 00:06:41.100 --rc genhtml_legend=1 00:06:41.100 --rc geninfo_all_blocks=1 00:06:41.100 --rc geninfo_unexecuted_blocks=1 00:06:41.100 00:06:41.100 ' 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.100 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:41.100 --rc genhtml_branch_coverage=1 00:06:41.100 --rc genhtml_function_coverage=1 00:06:41.100 --rc genhtml_legend=1 00:06:41.100 --rc geninfo_all_blocks=1 00:06:41.100 --rc geninfo_unexecuted_blocks=1 00:06:41.100 00:06:41.100 ' 00:06:41.100 13:55:19 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.100 --rc genhtml_branch_coverage=1 00:06:41.100 --rc genhtml_function_coverage=1 00:06:41.100 --rc genhtml_legend=1 00:06:41.100 --rc geninfo_all_blocks=1 00:06:41.100 --rc geninfo_unexecuted_blocks=1 00:06:41.100 00:06:41.100 ' 00:06:41.100 13:55:19 version -- app/version.sh@17 -- # get_header_version major 00:06:41.100 13:55:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # cut -f2 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.100 13:55:19 version -- app/version.sh@17 -- # major=24 00:06:41.100 13:55:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.100 13:55:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # cut -f2 00:06:41.100 13:55:19 version -- app/version.sh@18 -- # minor=9 00:06:41.100 13:55:19 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.100 13:55:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # cut -f2 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.100 13:55:19 version -- app/version.sh@19 -- # patch=1 00:06:41.100 13:55:19 version -- app/version.sh@20 -- # get_header_version suffix 00:06:41.100 13:55:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # cut -f2 00:06:41.100 13:55:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.100 13:55:19 version -- app/version.sh@20 -- # suffix=-pre 00:06:41.100 13:55:19 version -- app/version.sh@22 -- # version=24.9 00:06:41.100 13:55:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:41.101 13:55:19 version -- app/version.sh@25 -- # version=24.9.1 00:06:41.101 13:55:19 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:41.101 13:55:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:41.101 13:55:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:41.101 13:55:19 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:41.101 13:55:19 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:41.101 00:06:41.101 real 0m0.195s 00:06:41.101 user 0m0.131s 00:06:41.101 sys 0m0.088s 00:06:41.101 13:55:19 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.101 13:55:19 version -- common/autotest_common.sh@10 -- # set +x 00:06:41.101 ************************************ 00:06:41.101 END TEST version 
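Each get_header_version call above is the same three-stage pipeline: grep the matching #define out of include/spdk/version.h, take the second field, and strip quotes, yielding 24, 9, 1 and "-pre" for this tree. One stage spelled out (assuming the header's defines are tab-separated, as the cut -f2 above implies):

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24

The suffix handling then assembles 24.9 -> 24.9.1 -> 24.9.1rc0, which is matched against python3 -c 'import spdk; print(spdk.__version__)'.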
00:06:41.101 ************************************ 00:06:41.101 13:55:19 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:41.101 13:55:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:41.101 13:55:19 -- spdk/autotest.sh@194 -- # uname -s 00:06:41.101 13:55:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:41.101 13:55:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:41.101 13:55:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:41.101 13:55:19 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:41.101 13:55:19 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:41.101 13:55:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:41.101 13:55:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.101 13:55:19 -- common/autotest_common.sh@10 -- # set +x 00:06:41.101 ************************************ 00:06:41.101 START TEST blockdev_nvme 00:06:41.101 ************************************ 00:06:41.101 13:55:19 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:41.101 * Looking for test storage... 00:06:41.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:41.101 13:55:19 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.101 13:55:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.101 13:55:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.359 13:55:19 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.359 --rc genhtml_branch_coverage=1 00:06:41.359 --rc genhtml_function_coverage=1 00:06:41.359 --rc genhtml_legend=1 00:06:41.359 --rc geninfo_all_blocks=1 00:06:41.359 --rc geninfo_unexecuted_blocks=1 00:06:41.359 00:06:41.359 ' 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.359 --rc genhtml_branch_coverage=1 00:06:41.359 --rc genhtml_function_coverage=1 00:06:41.359 --rc genhtml_legend=1 00:06:41.359 --rc geninfo_all_blocks=1 00:06:41.359 --rc geninfo_unexecuted_blocks=1 00:06:41.359 00:06:41.359 ' 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.359 --rc genhtml_branch_coverage=1 00:06:41.359 --rc genhtml_function_coverage=1 00:06:41.359 --rc genhtml_legend=1 00:06:41.359 --rc geninfo_all_blocks=1 00:06:41.359 --rc geninfo_unexecuted_blocks=1 00:06:41.359 00:06:41.359 ' 00:06:41.359 13:55:19 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.359 --rc genhtml_branch_coverage=1 00:06:41.359 --rc genhtml_function_coverage=1 00:06:41.359 --rc genhtml_legend=1 00:06:41.359 --rc geninfo_all_blocks=1 00:06:41.359 --rc geninfo_unexecuted_blocks=1 00:06:41.359 00:06:41.359 ' 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.359 13:55:19 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:41.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:41.359 13:55:19 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72191 00:06:41.360 13:55:19 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:41.360 13:55:19 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72191 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72191 ']' 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.360 13:55:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.360 13:55:19 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:41.360 [2024-11-17 13:55:19.491944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
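The start_spdk_tgt step traced above launches the spdk_tgt daemon, records its pid for the cleanup trap, and waitforlisten polls the RPC UNIX socket until the target answers. A minimal sketch of that launch-and-wait pattern, assuming repo-relative paths and the default /var/tmp/spdk.sock socket (the polling loop is an illustration, not the actual waitforlisten helper):

    ./build/bin/spdk_tgt &                    # start the SPDK target in the background
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC socket until the target accepts requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done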
00:06:41.360 [2024-11-17 13:55:19.492067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72191 ] 00:06:41.360 [2024-11-17 13:55:19.639060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.617 [2024-11-17 13:55:19.670574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.184 13:55:20 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.184 13:55:20 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:42.184 13:55:20 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:42.185 13:55:20 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:42.185 13:55:20 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:42.185 13:55:20 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:42.185 13:55:20 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:42.185 13:55:20 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:42.185 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.185 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.443 13:55:20 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:42.443 13:55:20 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.443 13:55:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.702 13:55:20 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.702 13:55:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:42.702 13:55:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:42.703 13:55:20 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0ffba228-07bf-4d62-8669-32a3d9ed8cf5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ffba228-07bf-4d62-8669-32a3d9ed8cf5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fc1c3f63-f5c3-4f08-b7cf-084ea0af6cfd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fc1c3f63-f5c3-4f08-b7cf-084ea0af6cfd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3a75e436-f8ba-45e8-a31b-f7b59f95c0b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3a75e436-f8ba-45e8-a31b-f7b59f95c0b4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6f5c84d9-c8dd-42b2-93c0-a6327c6ffae6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f5c84d9-c8dd-42b2-93c0-a6327c6ffae6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "835350fc-16d5-4bc9-a527-a56cc7d63c51"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "835350fc-16d5-4bc9-a527-a56cc7d63c51",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "c711171f-be6a-4ef7-8acf-cf1c27b6dac3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c711171f-be6a-4ef7-8acf-cf1c27b6dac3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:42.703 13:55:20 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:42.703 13:55:20 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:42.703 13:55:20 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:42.703 13:55:20 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72191 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72191 ']' 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72191 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:42.703 13:55:20 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72191 00:06:42.703 killing process with pid 72191 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72191' 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72191 00:06:42.703 13:55:20 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72191 00:06:42.961 13:55:21 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:42.961 13:55:21 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:42.961 13:55:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:42.961 13:55:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.961 13:55:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.961 ************************************ 00:06:42.961 START TEST bdev_hello_world 00:06:42.961 ************************************ 00:06:42.961 13:55:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:42.961 [2024-11-17 13:55:21.164289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:42.961 [2024-11-17 13:55:21.164533] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72264 ] 00:06:43.219 [2024-11-17 13:55:21.313709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.219 [2024-11-17 13:55:21.346276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.478 [2024-11-17 13:55:21.714744] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:43.478 [2024-11-17 13:55:21.714789] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:43.478 [2024-11-17 13:55:21.714811] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:43.478 [2024-11-17 13:55:21.716922] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:43.478 [2024-11-17 13:55:21.717552] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:43.478 [2024-11-17 13:55:21.717578] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:43.478 [2024-11-17 13:55:21.718254] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
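The hello_bdev run completing here takes the entire bdev-layer configuration as JSON (the four-controller file produced by gen_nvme.sh above) plus the bdev to open via -b, then writes the greeting and reads it back. A minimal sketch of an equivalent standalone invocation, assuming a single emulated NVMe device at the illustrative address 0000:00:10.0:

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }
          ] }
      ]
    }
    EOF
    # Opens Nvme0n1, writes "Hello World!", and reads it back, as logged above.
    ./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1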
00:06:43.478 00:06:43.478 [2024-11-17 13:55:21.718294] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:43.736 00:06:43.736 real 0m0.775s 00:06:43.736 user 0m0.516s 00:06:43.736 sys 0m0.156s 00:06:43.737 13:55:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.737 ************************************ 00:06:43.737 END TEST bdev_hello_world 00:06:43.737 ************************************ 00:06:43.737 13:55:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:43.737 13:55:21 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:43.737 13:55:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:43.737 13:55:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.737 13:55:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.737 ************************************ 00:06:43.737 START TEST bdev_bounds 00:06:43.737 ************************************ 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72295 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.737 Process bdevio pid: 72295 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72295' 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72295 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72295 ']' 00:06:43.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:43.737 13:55:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:43.737 [2024-11-17 13:55:21.995026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
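The bdev_bounds test starting here wraps the bdevio app: launched with -w it loads the same JSON config and then idles until tests.py issues the perform_tests RPC that produces the per-bdev suites below. A minimal sketch of that orchestration, with the flags copied from the trace (-s 0 matches the PRE_RESERVED_MEM=0 set earlier; the path and the explicit kill are illustrative, not the script's killprocess helper):

    ./test/bdev/bdevio/bdevio -w -s 0 --json /path/to/bdev.json &
    bdevio_pid=$!
    # With bdevio listening, fire the test suites over RPC, then clean up.
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"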
00:06:43.737 [2024-11-17 13:55:21.995140] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72295 ] 00:06:43.994 [2024-11-17 13:55:22.143309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.994 [2024-11-17 13:55:22.177998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.994 [2024-11-17 13:55:22.178203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.994 [2024-11-17 13:55:22.178221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.929 13:55:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.929 13:55:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:44.929 13:55:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:44.929 I/O targets: 00:06:44.929 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:44.929 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:44.929 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.929 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.929 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:44.929 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:44.929 00:06:44.929 00:06:44.929 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.929 http://cunit.sourceforge.net/ 00:06:44.929 00:06:44.929 00:06:44.929 Suite: bdevio tests on: Nvme3n1 00:06:44.929 Test: blockdev write read block ...passed 00:06:44.929 Test: blockdev write zeroes read block ...passed 00:06:44.929 Test: blockdev write zeroes read no split ...passed 00:06:44.929 Test: blockdev write zeroes read split ...passed 00:06:44.929 Test: blockdev write zeroes read split partial ...passed 00:06:44.929 Test: blockdev reset ...[2024-11-17 13:55:22.972226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:44.929 [2024-11-17 13:55:22.975324] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.929 passed 00:06:44.929 Test: blockdev write read 8 blocks ...passed 00:06:44.929 Test: blockdev write read size > 128k ...passed 00:06:44.929 Test: blockdev write read invalid size ...passed 00:06:44.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.929 Test: blockdev write read max offset ...passed 00:06:44.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.929 Test: blockdev writev readv 8 blocks ...passed 00:06:44.929 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.929 Test: blockdev writev readv block ...passed 00:06:44.929 Test: blockdev writev readv size > 128k ...passed 00:06:44.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.929 Test: blockdev comparev and writev ...[2024-11-17 13:55:22.990545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a6606000 len:0x1000 00:06:44.929 [2024-11-17 13:55:22.990597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.929 passed 00:06:44.929 Test: blockdev nvme passthru rw ...passed 00:06:44.929 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.929 Test: blockdev nvme admin passthru ...[2024-11-17 13:55:22.992654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.929 [2024-11-17 13:55:22.992688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.929 passed 00:06:44.929 Test: blockdev copy ...passed 00:06:44.929 Suite: bdevio tests on: Nvme2n3 00:06:44.929 Test: blockdev write read block ...passed 00:06:44.929 Test: blockdev write zeroes read block ...passed 00:06:44.929 Test: blockdev write zeroes read no split ...passed 00:06:44.929 Test: blockdev write zeroes read split ...passed 00:06:44.929 Test: blockdev write zeroes read split partial ...passed 00:06:44.929 Test: blockdev reset ...[2024-11-17 13:55:23.019228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.929 passed 00:06:44.929 Test: blockdev write read 8 blocks ...[2024-11-17 13:55:23.021726] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.929 passed 00:06:44.929 Test: blockdev write read size > 128k ...passed 00:06:44.929 Test: blockdev write read invalid size ...passed 00:06:44.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.929 Test: blockdev write read max offset ...passed 00:06:44.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.929 Test: blockdev writev readv 8 blocks ...passed 00:06:44.929 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.929 Test: blockdev writev readv block ...passed 00:06:44.929 Test: blockdev writev readv size > 128k ...passed 00:06:44.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.929 Test: blockdev comparev and writev ...[2024-11-17 13:55:23.027487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6a05000 len:0x1000 00:06:44.929 [2024-11-17 13:55:23.027524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.929 passed 00:06:44.929 Test: blockdev nvme passthru rw ...passed 00:06:44.929 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.929 Test: blockdev nvme admin passthru ...[2024-11-17 13:55:23.028132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.929 [2024-11-17 13:55:23.028160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.929 passed 00:06:44.929 Test: blockdev copy ...passed 00:06:44.929 Suite: bdevio tests on: Nvme2n2 00:06:44.929 Test: blockdev write read block ...passed 00:06:44.929 Test: blockdev write zeroes read block ...passed 00:06:44.929 Test: blockdev write zeroes read no split ...passed 00:06:44.929 Test: blockdev write zeroes read split ...passed 00:06:44.929 Test: blockdev write zeroes read split partial ...passed 00:06:44.929 Test: blockdev reset ...[2024-11-17 13:55:23.041488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.929 passed 00:06:44.929 Test: blockdev write read 8 blocks ...[2024-11-17 13:55:23.044905] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.929 passed 00:06:44.929 Test: blockdev write read size > 128k ...passed 00:06:44.929 Test: blockdev write read invalid size ...passed 00:06:44.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.929 Test: blockdev write read max offset ...passed 00:06:44.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.929 Test: blockdev writev readv 8 blocks ...passed 00:06:44.929 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.929 Test: blockdev writev readv block ...passed 00:06:44.929 Test: blockdev writev readv size > 128k ...passed 00:06:44.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.930 Test: blockdev comparev and writev ...[2024-11-17 13:55:23.057187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e36000 len:0x1000 00:06:44.930 [2024-11-17 13:55:23.057223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev nvme passthru rw ...passed 00:06:44.930 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.930 Test: blockdev nvme admin passthru ...[2024-11-17 13:55:23.058617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.930 [2024-11-17 13:55:23.058649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev copy ...passed 00:06:44.930 Suite: bdevio tests on: Nvme2n1 00:06:44.930 Test: blockdev write read block ...passed 00:06:44.930 Test: blockdev write zeroes read block ...passed 00:06:44.930 Test: blockdev write zeroes read no split ...passed 00:06:44.930 Test: blockdev write zeroes read split ...passed 00:06:44.930 Test: blockdev write zeroes read split partial ...passed 00:06:44.930 Test: blockdev reset ...[2024-11-17 13:55:23.076219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:44.930 passed 00:06:44.930 Test: blockdev write read 8 blocks ...[2024-11-17 13:55:23.077923] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.930 passed 00:06:44.930 Test: blockdev write read size > 128k ...passed 00:06:44.930 Test: blockdev write read invalid size ...passed 00:06:44.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.930 Test: blockdev write read max offset ...passed 00:06:44.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.930 Test: blockdev writev readv 8 blocks ...passed 00:06:44.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.930 Test: blockdev writev readv block ...passed 00:06:44.930 Test: blockdev writev readv size > 128k ...passed 00:06:44.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.930 Test: blockdev comparev and writev ...[2024-11-17 13:55:23.084510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:44.930 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d6e30000 len:0x1000 00:06:44.930 [2024-11-17 13:55:23.084637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev nvme passthru vendor specific ...passed 00:06:44.930 Test: blockdev nvme admin passthru ...[2024-11-17 13:55:23.085471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.930 [2024-11-17 13:55:23.085501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev copy ...passed 00:06:44.930 Suite: bdevio tests on: Nvme1n1 00:06:44.930 Test: blockdev write read block ...passed 00:06:44.930 Test: blockdev write zeroes read block ...passed 00:06:44.930 Test: blockdev write zeroes read no split ...passed 00:06:44.930 Test: blockdev write zeroes read split ...passed 00:06:44.930 Test: blockdev write zeroes read split partial ...passed 00:06:44.930 Test: blockdev reset ...[2024-11-17 13:55:23.100646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:44.930 passed 00:06:44.930 Test: blockdev write read 8 blocks ...[2024-11-17 13:55:23.102801] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:44.930 passed 00:06:44.930 Test: blockdev write read size > 128k ...passed 00:06:44.930 Test: blockdev write read invalid size ...passed 00:06:44.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.930 Test: blockdev write read max offset ...passed 00:06:44.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.930 Test: blockdev writev readv 8 blocks ...passed 00:06:44.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.930 Test: blockdev writev readv block ...passed 00:06:44.930 Test: blockdev writev readv size > 128k ...passed 00:06:44.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.930 Test: blockdev comparev and writev ...[2024-11-17 13:55:23.114268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e2c000 len:0x1000 00:06:44.930 [2024-11-17 13:55:23.114302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev nvme passthru rw ...passed 00:06:44.930 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:55:23.116034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:44.930 [2024-11-17 13:55:23.116060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev nvme admin passthru ...passed 00:06:44.930 Test: blockdev copy ...passed 00:06:44.930 Suite: bdevio tests on: Nvme0n1 00:06:44.930 Test: blockdev write read block ...passed 00:06:44.930 Test: blockdev write zeroes read block ...passed 00:06:44.930 Test: blockdev write zeroes read no split ...passed 00:06:44.930 Test: blockdev write zeroes read split ...passed 00:06:44.930 Test: blockdev write zeroes read split partial ...passed 00:06:44.930 Test: blockdev reset ...[2024-11-17 13:55:23.135525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:44.930 passed 00:06:44.930 Test: blockdev write read 8 blocks ...[2024-11-17 13:55:23.137580] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:06:44.930 passed 00:06:44.930 Test: blockdev write read size > 128k ...passed 00:06:44.930 Test: blockdev write read invalid size ...passed 00:06:44.930 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:44.930 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:44.930 Test: blockdev write read max offset ...passed 00:06:44.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:44.930 Test: blockdev writev readv 8 blocks ...passed 00:06:44.930 Test: blockdev writev readv 30 x 1block ...passed 00:06:44.930 Test: blockdev writev readv block ...passed 00:06:44.930 Test: blockdev writev readv size > 128k ...passed 00:06:44.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:44.930 Test: blockdev comparev and writev ...passed 00:06:44.930 Test: blockdev nvme passthru rw ...[2024-11-17 13:55:23.144939] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:44.930 separate metadata which is not supported yet. 
00:06:44.930 passed 00:06:44.930 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:55:23.145625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:44.930 [2024-11-17 13:55:23.145655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:44.930 passed 00:06:44.930 Test: blockdev nvme admin passthru ...passed 00:06:44.930 Test: blockdev copy ...passed 00:06:44.930 00:06:44.930 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.930 suites 6 6 n/a 0 0 00:06:44.930 tests 138 138 138 0 0 00:06:44.930 asserts 893 893 893 0 n/a 00:06:44.930 00:06:44.930 Elapsed time = 0.469 seconds 00:06:44.930 0 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72295 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72295 ']' 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72295 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72295 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72295' 00:06:44.930 killing process with pid 72295 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72295 00:06:44.930 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72295 00:06:45.189 13:55:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:45.189 00:06:45.189 real 0m1.423s 00:06:45.189 user 0m3.573s 00:06:45.189 sys 0m0.271s 00:06:45.189 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.189 13:55:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:45.189 ************************************ 00:06:45.189 END TEST bdev_bounds 00:06:45.189 ************************************ 00:06:45.189 13:55:23 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:45.189 13:55:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:45.189 13:55:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.189 13:55:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.189 ************************************ 00:06:45.189 START TEST bdev_nbd 00:06:45.189 ************************************ 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72338 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72338 /var/tmp/spdk-nbd.sock 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72338 ']' 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.189 13:55:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:45.459 [2024-11-17 13:55:23.488440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
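The nbd test setting up here exports each bdev as a kernel /dev/nbdX block device through the bdev_svc RPC socket, then reads one 4 KiB block with dd, as the checks below do. A minimal per-device sketch, assuming the /var/tmp/spdk-nbd.sock socket shown above and the nbd kernel module already loaded:

    rpc=./scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    $rpc -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    # One O_DIRECT read of a single 4096-byte block proves the device serves I/O.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0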
00:06:45.459 [2024-11-17 13:55:23.488550] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.459 [2024-11-17 13:55:23.638895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.459 [2024-11-17 13:55:23.670573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.399 1+0 records in 
00:06:46.399 1+0 records out 00:06:46.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002789 s, 14.7 MB/s 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.399 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.400 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.657 1+0 records in 00:06:46.657 1+0 records out 00:06:46.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412094 s, 9.9 MB/s 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.657 13:55:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.915 1+0 records in 00:06:46.915 1+0 records out 00:06:46.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348595 s, 11.8 MB/s 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.915 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.172 1+0 records in 00:06:47.172 1+0 records out 00:06:47.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397477 s, 10.3 MB/s 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.172 13:55:25 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:47.172 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.173 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.173 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:47.173 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.173 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.173 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.431 1+0 records in 00:06:47.431 1+0 records out 00:06:47.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482629 s, 8.5 MB/s 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.431 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.689 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.690 1+0 records in 00:06:47.690 1+0 records out 00:06:47.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598532 s, 6.8 MB/s 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd0", 00:06:47.690 "bdev_name": "Nvme0n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd1", 00:06:47.690 "bdev_name": "Nvme1n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd2", 00:06:47.690 "bdev_name": "Nvme2n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd3", 00:06:47.690 "bdev_name": "Nvme2n2" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd4", 00:06:47.690 "bdev_name": "Nvme2n3" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd5", 00:06:47.690 "bdev_name": "Nvme3n1" 00:06:47.690 } 00:06:47.690 ]' 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd0", 00:06:47.690 "bdev_name": "Nvme0n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd1", 00:06:47.690 "bdev_name": "Nvme1n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd2", 00:06:47.690 "bdev_name": "Nvme2n1" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd3", 00:06:47.690 "bdev_name": "Nvme2n2" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd4", 00:06:47.690 "bdev_name": "Nvme2n3" 00:06:47.690 }, 00:06:47.690 { 00:06:47.690 "nbd_device": "/dev/nbd5", 00:06:47.690 "bdev_name": "Nvme3n1" 00:06:47.690 } 00:06:47.690 ]' 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.690 13:55:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.948 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.206 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.464 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.722 13:55:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.980 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.238 13:55:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.238 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:49.497 /dev/nbd0 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.497 
13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.497 1+0 records in 00:06:49.497 1+0 records out 00:06:49.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348316 s, 11.8 MB/s 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.497 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:49.756 /dev/nbd1 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.756 1+0 records in 00:06:49.756 1+0 records out 00:06:49.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291746 s, 14.0 MB/s 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.756 13:55:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:50.014 /dev/nbd10 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.014 1+0 records in 00:06:50.014 1+0 records out 00:06:50.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303936 s, 13.5 MB/s 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.014 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:50.273 /dev/nbd11 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.273 13:55:28 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.273 1+0 records in 00:06:50.273 1+0 records out 00:06:50.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372799 s, 11.0 MB/s 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.273 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:50.531 /dev/nbd12 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:50.531 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.532 1+0 records in 00:06:50.532 1+0 records out 00:06:50.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358674 s, 11.4 MB/s 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.532 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:50.790 /dev/nbd13 
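Each of the six nbd_start_disk rounds traced above (the last, for /dev/nbd13, completes directly below) follows one pattern: ask the RPC server to expose a bdev over NBD, poll /proc/partitions until the kernel node shows up, then read a single O_DIRECT block to prove the device actually works. A minimal standalone sketch of that pattern — the helper name, temp path, and retry pacing are assumptions; the retry bound, dd flags, and size check are taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdtest    # assumed scratch path; the suite itself writes to spdk/test/bdev/nbdtest

    start_and_check() {    # hypothetical helper condensing nbd_start_disk + waitfornbd
        local bdev=$1 dev i
        dev=$("$rpc" -s "$sock" nbd_start_disk "$bdev")    # prints the allocated node, e.g. /dev/nbd0
        for ((i = 1; i <= 20; i++)); do    # same 20-try bound as in the trace
            grep -q -w "$(basename "$dev")" /proc/partitions && break
            sleep 0.1    # assumed pacing; set -x does not show the real interval
        done
        dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct    # one direct-I/O block read
        [ "$(stat -c %s "$tmp")" != 0 ] || return 1    # the copied file must be non-empty
        rm -f "$tmp"
    }

    start_and_check Nvme0n1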
00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.790 1+0 records in 00:06:50.790 1+0 records out 00:06:50.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405657 s, 10.1 MB/s 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.790 13:55:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.790 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd0", 00:06:50.790 "bdev_name": "Nvme0n1" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd1", 00:06:50.790 "bdev_name": "Nvme1n1" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd10", 00:06:50.790 "bdev_name": "Nvme2n1" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd11", 00:06:50.790 "bdev_name": "Nvme2n2" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd12", 00:06:50.790 "bdev_name": "Nvme2n3" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd13", 00:06:50.790 "bdev_name": "Nvme3n1" 00:06:50.790 } 00:06:50.790 ]' 00:06:50.790 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd0", 00:06:50.790 "bdev_name": "Nvme0n1" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd1", 00:06:50.790 "bdev_name": "Nvme1n1" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd10", 00:06:50.790 "bdev_name": "Nvme2n1" 
00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd11", 00:06:50.790 "bdev_name": "Nvme2n2" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd12", 00:06:50.790 "bdev_name": "Nvme2n3" 00:06:50.790 }, 00:06:50.790 { 00:06:50.790 "nbd_device": "/dev/nbd13", 00:06:50.790 "bdev_name": "Nvme3n1" 00:06:50.790 } 00:06:50.790 ]' 00:06:50.790 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.049 /dev/nbd1 00:06:51.049 /dev/nbd10 00:06:51.049 /dev/nbd11 00:06:51.049 /dev/nbd12 00:06:51.049 /dev/nbd13' 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.049 /dev/nbd1 00:06:51.049 /dev/nbd10 00:06:51.049 /dev/nbd11 00:06:51.049 /dev/nbd12 00:06:51.049 /dev/nbd13' 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:51.049 256+0 records in 00:06:51.049 256+0 records out 00:06:51.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660984 s, 159 MB/s 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.049 256+0 records in 00:06:51.049 256+0 records out 00:06:51.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0497259 s, 21.1 MB/s 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.049 256+0 records in 00:06:51.049 256+0 records out 00:06:51.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0501359 s, 20.9 MB/s 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:51.049 256+0 records in 00:06:51.049 256+0 records out 
00:06:51.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0484722 s, 21.6 MB/s 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.049 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:51.049 256+0 records in 00:06:51.049 256+0 records out 00:06:51.049 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0509388 s, 20.6 MB/s 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:51.310 256+0 records in 00:06:51.310 256+0 records out 00:06:51.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0482706 s, 21.7 MB/s 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:51.310 256+0 records in 00:06:51.310 256+0 records out 00:06:51.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0682636 s, 15.4 MB/s 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.310 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.568 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.569 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.826 13:55:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.084 
13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.084 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.341 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.672 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.673 13:55:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.673 13:55:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:52.933 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:53.194 malloc_lvol_verify 00:06:53.194 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:53.455 262311bc-f592-40eb-8c2a-d8d113884020 00:06:53.455 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:53.455 4cb64640-f8ae-47dd-bcfd-2930bd840f45 00:06:53.455 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:53.715 /dev/nbd0 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:53.715 mke2fs 1.47.0 (5-Feb-2023) 00:06:53.715 Discarding device blocks: 0/4096 done 00:06:53.715 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:53.715 00:06:53.715 Allocating group tables: 0/1 done 00:06:53.715 Writing inode tables: 0/1 done 00:06:53.715 Creating journal (1024 blocks): done 00:06:53.715 Writing superblocks and filesystem accounting information: 0/1 done 00:06:53.715 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
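Condensed, the nbd_with_lvol_verify step traced here is four RPCs plus a mkfs: carve a logical volume out of a malloc bdev, expose it as /dev/nbd0, confirm the kernel sees a non-zero capacity, then format it. A replay of the exact commands from the trace, assuming the same spdk-nbd.sock app is still serving (comments are ours):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512    # 16 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs    # prints the new lvstore UUID
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                     # 4 MiB lvol, printed as a UUID
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) > 0 ))    # capacity reached the kernel: 8192 x 512 B sectors here
    mkfs.ext4 /dev/nbd0                      # hence the 4096 1k-block filesystem in the mke2fs output above
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0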
00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.715 13:55:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72338 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72338 ']' 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72338 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72338 00:06:53.975 killing process with pid 72338 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72338' 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72338 00:06:53.975 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72338 00:06:54.237 13:55:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:54.237 00:06:54.237 real 0m8.954s 00:06:54.237 user 0m13.297s 00:06:54.237 sys 0m2.899s 00:06:54.237 ************************************ 00:06:54.237 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.237 13:55:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.237 END TEST bdev_nbd 00:06:54.237 ************************************ 00:06:54.237 skipping fio tests on NVMe due to multi-ns failures. 00:06:54.237 13:55:32 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:54.237 13:55:32 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:54.237 13:55:32 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
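The bdev_verify and bdev_verify_big_io tests that start below both drive the bdevperf example app in verify mode (write a pattern, read it back, compare) and differ only in I/O size: -o 4096 versus -o 65536. For reference, the invocation restated as a standalone command; bdev.json, which defines the six Nvme* bdevs, is not reproduced in this log, and the reading of -C is our interpretation of the per-core result rows:

    # -q 128: queue depth; -w verify: write, read back, and compare; -t 5: run for 5 seconds;
    # -m 0x3: cores 0 and 1; -C: let every core in the mask send I/O to each bdev,
    # which appears to be why each Nvme* device reports once per core mask in the result tables.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3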
00:06:54.237 13:55:32 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:54.237 13:55:32 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.237 13:55:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:54.237 13:55:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.237 13:55:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.237 ************************************ 00:06:54.237 START TEST bdev_verify 00:06:54.237 ************************************ 00:06:54.237 13:55:32 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.237 [2024-11-17 13:55:32.483053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:54.237 [2024-11-17 13:55:32.483181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72704 ] 00:06:54.499 [2024-11-17 13:55:32.629843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.499 [2024-11-17 13:55:32.663391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.499 [2024-11-17 13:55:32.663468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.761 Running I/O for 5 seconds... 00:06:57.092 21760.00 IOPS, 85.00 MiB/s [2024-11-17T13:55:36.334Z] 23040.00 IOPS, 90.00 MiB/s [2024-11-17T13:55:37.273Z] 23381.33 IOPS, 91.33 MiB/s [2024-11-17T13:55:38.208Z] 23424.00 IOPS, 91.50 MiB/s [2024-11-17T13:55:38.208Z] 23334.40 IOPS, 91.15 MiB/s 00:06:59.907 Latency(us) 00:06:59.907 [2024-11-17T13:55:38.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.907 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0xbd0bd 00:06:59.907 Nvme0n1 : 5.06 1921.08 7.50 0.00 0.00 66459.32 12552.66 73803.62 00:06:59.907 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:59.907 Nvme0n1 : 5.07 1945.27 7.60 0.00 0.00 65654.89 10536.17 72997.02 00:06:59.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0xa0000 00:06:59.907 Nvme1n1 : 5.07 1920.50 7.50 0.00 0.00 66204.96 13712.15 58074.98 00:06:59.907 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0xa0000 length 0xa0000 00:06:59.907 Nvme1n1 : 5.07 1943.83 7.59 0.00 0.00 65604.58 12149.37 64931.05 00:06:59.907 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0x80000 00:06:59.907 Nvme2n1 : 5.07 1919.95 7.50 0.00 0.00 66109.42 14115.45 55655.19 00:06:59.907 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x80000 length 0x80000 00:06:59.907 Nvme2n1 : 5.07 1943.33 7.59 0.00 0.00 65414.65 13308.85 58074.98 00:06:59.907 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0x80000 00:06:59.907 Nvme2n2 : 5.07 1917.88 7.49 0.00 0.00 66047.99 15224.52 57268.38 00:06:59.907 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x80000 length 0x80000 00:06:59.907 Nvme2n2 : 5.08 1941.81 7.59 0.00 0.00 65294.13 14115.45 56865.08 00:06:59.907 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0x80000 00:06:59.907 Nvme2n3 : 5.07 1917.02 7.49 0.00 0.00 65953.00 14317.10 60091.47 00:06:59.907 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x80000 length 0x80000 00:06:59.907 Nvme2n3 : 5.08 1941.32 7.58 0.00 0.00 65189.03 12905.55 58478.28 00:06:59.907 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x0 length 0x20000 00:06:59.907 Nvme3n1 : 5.08 1927.13 7.53 0.00 0.00 65558.57 2003.89 61704.66 00:06:59.907 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:59.907 Verification LBA range: start 0x20000 length 0x20000 00:06:59.907 Nvme3n1 : 5.08 1940.79 7.58 0.00 0.00 65080.46 10183.29 62107.96 00:06:59.907 [2024-11-17T13:55:38.208Z] =================================================================================================================== 00:06:59.907 [2024-11-17T13:55:38.208Z] Total : 23179.91 90.55 0.00 0.00 65711.94 2003.89 73803.62 00:07:01.281 00:07:01.281 real 0m6.868s 00:07:01.281 user 0m13.033s 00:07:01.281 sys 0m0.218s 00:07:01.281 13:55:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.281 13:55:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:01.281 ************************************ 00:07:01.281 END TEST bdev_verify 00:07:01.281 ************************************ 00:07:01.281 13:55:39 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:01.281 13:55:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:01.281 13:55:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.281 13:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.281 ************************************ 00:07:01.281 START TEST bdev_verify_big_io 00:07:01.281 ************************************ 00:07:01.281 13:55:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:01.281 [2024-11-17 13:55:39.387944] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:01.282 [2024-11-17 13:55:39.388060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72797 ] 00:07:01.282 [2024-11-17 13:55:39.534914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.282 [2024-11-17 13:55:39.566541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.282 [2024-11-17 13:55:39.566643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.849 Running I/O for 5 seconds... 00:07:06.944 845.00 IOPS, 52.81 MiB/s [2024-11-17T13:55:46.183Z] 2662.00 IOPS, 166.38 MiB/s [2024-11-17T13:55:46.183Z] 2915.00 IOPS, 182.19 MiB/s 00:07:07.882 Latency(us) 00:07:07.882 [2024-11-17T13:55:46.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.882 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0xbd0b 00:07:07.882 Nvme0n1 : 5.66 135.67 8.48 0.00 0.00 909779.04 10586.58 1167952.34 00:07:07.882 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:07.882 Nvme0n1 : 5.91 149.60 9.35 0.00 0.00 752020.24 27424.30 871124.68 00:07:07.882 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0xa000 00:07:07.882 Nvme1n1 : 5.66 135.58 8.47 0.00 0.00 877308.59 104051.00 974369.08 00:07:07.882 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0xa000 length 0xa000 00:07:07.882 Nvme1n1 : 5.91 151.59 9.47 0.00 0.00 715655.48 33675.42 871124.68 00:07:07.882 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0x8000 00:07:07.882 Nvme2n1 : 5.82 142.99 8.94 0.00 0.00 807555.73 63721.16 858219.13 00:07:07.882 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x8000 length 0x8000 00:07:07.882 Nvme2n1 : 6.03 180.35 11.27 0.00 0.00 583212.58 152.81 871124.68 00:07:07.882 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0x8000 00:07:07.882 Nvme2n2 : 5.91 138.58 8.66 0.00 0.00 802274.14 58074.98 1555118.87 00:07:07.882 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x8000 length 0x8000 00:07:07.882 Nvme2n2 : 5.69 135.06 8.44 0.00 0.00 918936.62 19156.68 1161499.57 00:07:07.882 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0x8000 00:07:07.882 Nvme2n3 : 5.97 147.46 9.22 0.00 0.00 730413.39 54848.59 1432516.14 00:07:07.882 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x8000 length 0x8000 00:07:07.882 Nvme2n3 : 5.69 135.02 8.44 0.00 0.00 886523.54 71787.13 967916.31 00:07:07.882 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:07.882 Verification LBA range: start 0x0 length 0x2000 00:07:07.882 Nvme3n1 : 6.04 167.16 10.45 0.00 0.00 625660.98 920.02 1626099.40 00:07:07.882 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:07:07.883 Verification LBA range: start 0x2000 length 0x2000 00:07:07.883 Nvme3n1 : 5.81 135.74 8.48 0.00 0.00 843959.02 111310.38 877577.45 00:07:07.883 [2024-11-17T13:55:46.184Z] =================================================================================================================== 00:07:07.883 [2024-11-17T13:55:46.184Z] Total : 1754.79 109.67 0.00 0.00 775962.91 152.81 1626099.40 00:07:08.826 ************************************ 00:07:08.826 END TEST bdev_verify_big_io 00:07:08.826 ************************************ 00:07:08.826 00:07:08.826 real 0m7.671s 00:07:08.826 user 0m14.664s 00:07:08.826 sys 0m0.194s 00:07:08.826 13:55:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.826 13:55:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:08.826 13:55:47 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.826 13:55:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:08.826 13:55:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.826 13:55:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.826 ************************************ 00:07:08.826 START TEST bdev_write_zeroes 00:07:08.826 ************************************ 00:07:08.826 13:55:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.826 [2024-11-17 13:55:47.101523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:08.826 [2024-11-17 13:55:47.101633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72895 ] 00:07:09.085 [2024-11-17 13:55:47.250664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.085 [2024-11-17 13:55:47.281964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.651 Running I/O for 1 seconds... 
00:07:10.590 75648.00 IOPS, 295.50 MiB/s 00:07:10.590 Latency(us) 00:07:10.590 [2024-11-17T13:55:48.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.590 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme0n1 : 1.02 12535.12 48.97 0.00 0.00 10190.73 8519.68 24197.91 00:07:10.590 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme1n1 : 1.02 12520.75 48.91 0.00 0.00 10189.04 8519.68 23996.26 00:07:10.590 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme2n1 : 1.02 12506.56 48.85 0.00 0.00 10179.97 8570.09 23391.31 00:07:10.590 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme2n2 : 1.02 12492.27 48.80 0.00 0.00 10160.76 8721.33 22383.06 00:07:10.590 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme2n3 : 1.03 12478.18 48.74 0.00 0.00 10139.83 7360.20 22685.54 00:07:10.590 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.590 Nvme3n1 : 1.03 12464.12 48.69 0.00 0.00 10126.78 6276.33 24298.73 00:07:10.590 [2024-11-17T13:55:48.891Z] =================================================================================================================== 00:07:10.590 [2024-11-17T13:55:48.891Z] Total : 74996.99 292.96 0.00 0.00 10164.52 6276.33 24298.73 00:07:10.590 00:07:10.590 real 0m1.826s 00:07:10.590 user 0m1.558s 00:07:10.590 sys 0m0.157s 00:07:10.590 13:55:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.591 13:55:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:10.591 ************************************ 00:07:10.591 END TEST bdev_write_zeroes 00:07:10.591 ************************************ 00:07:10.849 13:55:48 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.849 13:55:48 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:10.849 13:55:48 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.849 13:55:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.849 ************************************ 00:07:10.849 START TEST bdev_json_nonenclosed 00:07:10.849 ************************************ 00:07:10.849 13:55:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.849 [2024-11-17 13:55:48.956249] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
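[Editor's note] The three runs above (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all drive the same bdevperf example binary; only the workload flags change. Condensed from the traced command lines, with the common flags read the standard bdevperf way (-q queue depth, -o IO size in bytes, -w workload type, -t run time in seconds); -C and -m 0x3 are copied verbatim from the verify runs, which executed on two cores:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # bdev_verify: 4 KiB verified IO for 5 s
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    # bdev_verify_big_io: the same verify workload with 64 KiB IO
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    # bdev_write_zeroes: 4 KiB write-zeroes for 1 s on a single core
    "$bdevperf" --json "$conf" -q 128 -o 4096  -w write_zeroes -t 1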
00:07:10.849 [2024-11-17 13:55:48.956343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72932 ] 00:07:10.850 [2024-11-17 13:55:49.098258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.850 [2024-11-17 13:55:49.129534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.850 [2024-11-17 13:55:49.129618] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:10.850 [2024-11-17 13:55:49.129632] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:10.850 [2024-11-17 13:55:49.129649] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.108 00:07:11.108 real 0m0.301s 00:07:11.108 user 0m0.105s 00:07:11.108 sys 0m0.092s 00:07:11.108 13:55:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.108 13:55:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:11.108 ************************************ 00:07:11.108 END TEST bdev_json_nonenclosed 00:07:11.108 ************************************ 00:07:11.108 13:55:49 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.108 13:55:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:11.108 13:55:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.108 13:55:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.108 ************************************ 00:07:11.108 START TEST bdev_json_nonarray 00:07:11.108 ************************************ 00:07:11.108 13:55:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.108 [2024-11-17 13:55:49.307561] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.108 [2024-11-17 13:55:49.307670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72957 ] 00:07:11.367 [2024-11-17 13:55:49.454814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.367 [2024-11-17 13:55:49.486048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.367 [2024-11-17 13:55:49.486143] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:11.367 [2024-11-17 13:55:49.486161] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.367 [2024-11-17 13:55:49.486171] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.367 00:07:11.367 real 0m0.310s 00:07:11.367 user 0m0.120s 00:07:11.367 sys 0m0.086s 00:07:11.367 13:55:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.367 13:55:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:11.367 ************************************ 00:07:11.367 END TEST bdev_json_nonarray 00:07:11.367 ************************************ 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:11.367 13:55:49 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:11.367 00:07:11.367 real 0m30.325s 00:07:11.367 user 0m48.842s 00:07:11.367 sys 0m4.740s 00:07:11.367 13:55:49 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.367 13:55:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.367 ************************************ 00:07:11.367 END TEST blockdev_nvme 00:07:11.367 ************************************ 00:07:11.367 13:55:49 -- spdk/autotest.sh@209 -- # uname -s 00:07:11.367 13:55:49 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:11.367 13:55:49 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:11.367 13:55:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:11.367 13:55:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.367 13:55:49 -- common/autotest_common.sh@10 -- # set +x 00:07:11.367 ************************************ 00:07:11.367 START TEST blockdev_nvme_gpt 00:07:11.367 ************************************ 00:07:11.367 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:11.624 * Looking for test storage... 
00:07:11.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.624 13:55:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:11.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.624 --rc genhtml_branch_coverage=1 00:07:11.624 --rc genhtml_function_coverage=1 00:07:11.624 --rc genhtml_legend=1 00:07:11.624 --rc geninfo_all_blocks=1 00:07:11.624 --rc geninfo_unexecuted_blocks=1 00:07:11.624 00:07:11.624 ' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:11.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.624 --rc 
genhtml_branch_coverage=1 00:07:11.624 --rc genhtml_function_coverage=1 00:07:11.624 --rc genhtml_legend=1 00:07:11.624 --rc geninfo_all_blocks=1 00:07:11.624 --rc geninfo_unexecuted_blocks=1 00:07:11.624 00:07:11.624 ' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:11.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.624 --rc genhtml_branch_coverage=1 00:07:11.624 --rc genhtml_function_coverage=1 00:07:11.624 --rc genhtml_legend=1 00:07:11.624 --rc geninfo_all_blocks=1 00:07:11.624 --rc geninfo_unexecuted_blocks=1 00:07:11.624 00:07:11.624 ' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:11.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.624 --rc genhtml_branch_coverage=1 00:07:11.624 --rc genhtml_function_coverage=1 00:07:11.624 --rc genhtml_legend=1 00:07:11.624 --rc geninfo_all_blocks=1 00:07:11.624 --rc geninfo_unexecuted_blocks=1 00:07:11.624 00:07:11.624 ' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:11.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
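[Editor's note] The xtrace at the top of this gpt suite is scripts/common.sh deciding whether the installed lcov (1.15 here, taken from `lcov --version | awk '{print $NF}'`) predates version 2: each version string is split on ., - and : and the fields are compared numerically, left to right. A simplified sketch of that comparison, assuming purely numeric fields (the real helper also validates each field as a decimal):

    # version_lt A B: succeed when version A sorts strictly before version B.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"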
00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:11.624 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:11.625 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:11.625 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73030 00:07:11.625 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:11.625 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73030 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73030 ']' 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.625 13:55:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.625 13:55:49 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:11.625 [2024-11-17 13:55:49.846020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:11.625 [2024-11-17 13:55:49.846121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73030 ] 00:07:11.881 [2024-11-17 13:55:49.984566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.882 [2024-11-17 13:55:50.017025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.451 13:55:50 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.451 13:55:50 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:12.451 13:55:50 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:12.451 13:55:50 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:12.451 13:55:50 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:12.710 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.996 Waiting for block devices as requested 00:07:12.996 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.996 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:12.996 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:13.279 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.571 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:18.571 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.571 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- 
# for nvme in /sys/block/nvme* 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:18.572 BYT; 00:07:18.572 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:18.572 BYT; 00:07:18.572 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
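[Editor's note] get_zoned_devs above visits every /sys/block/nvme* node and reads its queue/zoned attribute; a device counts as zoned only when that attribute exists and reads something other than "none" (every device in this run reported none, so nothing was excluded). A compact sketch of the same scan:

    # Collect NVMe block devices whose queue reports a zoned model.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"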
00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:18.572 13:55:56 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:18.572 13:55:56 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:19.507 The operation has completed successfully. 00:07:19.507 13:55:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:20.440 The operation has completed successfully. 
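[Editor's note] With /dev/nvme0n1 reporting an unrecognised disk label, the setup above writes a fresh GPT holding two half-disk partitions, then retags them with the SPDK partition-type GUIDs grepped out of module/bdev/gpt/gpt.h and the fixed unique GUIDs the suite expects. Condensed from the traced commands (each sgdisk call prints the "operation has completed successfully" line seen in the log):

    dev=/dev/nvme0n1
    # New GPT label with two 50% partitions.
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # Partition 1: current SPDK GPT type GUID; partition 2: the old type GUID.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"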
00:07:20.440 13:55:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:20.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:21.264 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.264 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.264 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.264 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:21.264 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.264 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.264 [] 00:07:21.264 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:21.264 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:21.264 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.264 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.521 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.521 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:21.521 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.521 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.521 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:21.778 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.778 13:55:59 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.778 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:21.778 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.778 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:21.778 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:21.778 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:21.779 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "e4e58b2b-f131-4e15-917e-dffcf1496501"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "e4e58b2b-f131-4e15-917e-dffcf1496501",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "7a2b8b13-04c1-4a30-9412-a1a5f86ddd81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7a2b8b13-04c1-4a30-9412-a1a5f86ddd81",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1849d27c-e310-441e-bd23-1b0951b40fc5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1849d27c-e310-441e-bd23-1b0951b40fc5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "33f6452a-32e7-48d0-b4dd-da67b46b6664"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33f6452a-32e7-48d0-b4dd-da67b46b6664",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0f9b8dfe-3fbb-405a-a5c3-d21fac1f09c2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0f9b8dfe-3fbb-405a-a5c3-d21fac1f09c2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:21.779 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:21.779 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:21.779 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:21.779 13:55:59 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73030 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73030 ']' 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73030 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73030 00:07:21.779 killing process with pid 73030 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73030' 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73030 00:07:21.779 13:55:59 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73030 00:07:22.037 13:56:00 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.037 13:56:00 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:22.037 13:56:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:22.037 13:56:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.037 13:56:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.037 ************************************ 00:07:22.037 START TEST bdev_hello_world 00:07:22.037 ************************************ 00:07:22.037 13:56:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:22.037 [2024-11-17 13:56:00.272337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:22.037 [2024-11-17 13:56:00.272453] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73645 ] 00:07:22.295 [2024-11-17 13:56:00.419386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.295 [2024-11-17 13:56:00.450438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.552 [2024-11-17 13:56:00.816373] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:22.552 [2024-11-17 13:56:00.816418] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:22.552 [2024-11-17 13:56:00.816435] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:22.552 [2024-11-17 13:56:00.818485] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:22.552 [2024-11-17 13:56:00.818871] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:22.552 [2024-11-17 13:56:00.818899] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:22.552 [2024-11-17 13:56:00.819078] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:22.552 00:07:22.552 [2024-11-17 13:56:00.819145] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:22.809 00:07:22.809 real 0m0.757s 00:07:22.809 user 0m0.490s 00:07:22.809 sys 0m0.164s 00:07:22.809 13:56:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.809 13:56:00 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 ************************************ 00:07:22.809 END TEST bdev_hello_world 00:07:22.809 ************************************ 00:07:22.809 13:56:01 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:22.809 13:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.809 13:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.809 13:56:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 ************************************ 00:07:22.809 START TEST bdev_bounds 00:07:22.809 ************************************ 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73670 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:22.809 Process bdevio pid: 73670 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73670' 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73670 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73670 ']' 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
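[Editor's note] The bdev_hello_world pass above is a single run of the hello_bdev example against the bdev named on the command line: it opens Nvme0n1, writes a buffer through an io channel, reads it back, and prints the round-tripped string before stopping the app. As invoked in the trace:

    hello=/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$hello" --json "$conf" -b Nvme0n1
    # Success ends with the notice:
    #   hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!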
00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:22.809 [2024-11-17 13:56:01.073047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:22.809 [2024-11-17 13:56:01.073164] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73670 ] 00:07:23.067 [2024-11-17 13:56:01.219635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.067 [2024-11-17 13:56:01.252661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.067 [2024-11-17 13:56:01.252923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.067 [2024-11-17 13:56:01.252967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.633 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.633 13:56:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:23.633 13:56:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:23.892 I/O targets: 00:07:23.892 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:23.892 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:23.892 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:23.892 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.892 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.892 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.892 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:23.892 00:07:23.892 00:07:23.892 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.892 http://cunit.sourceforge.net/ 00:07:23.892 00:07:23.892 00:07:23.892 Suite: bdevio tests on: Nvme3n1 00:07:23.892 Test: blockdev write read block ...passed 00:07:23.892 Test: blockdev write zeroes read block ...passed 00:07:23.892 Test: blockdev write zeroes read no split ...passed 00:07:23.892 Test: blockdev write zeroes read split ...passed 00:07:23.892 Test: blockdev write zeroes read split partial ...passed 00:07:23.892 Test: blockdev reset ...[2024-11-17 13:56:02.005607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:23.892 [2024-11-17 13:56:02.007638] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.892 passed 00:07:23.892 Test: blockdev write read 8 blocks ...passed 00:07:23.892 Test: blockdev write read size > 128k ...passed 00:07:23.892 Test: blockdev write read invalid size ...passed 00:07:23.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.892 Test: blockdev write read max offset ...passed 00:07:23.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.892 Test: blockdev writev readv 8 blocks ...passed 00:07:23.892 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.892 Test: blockdev writev readv block ...passed 00:07:23.892 Test: blockdev writev readv size > 128k ...passed 00:07:23.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.892 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.013388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a500e000 len:0x1000 00:07:23.892 [2024-11-17 13:56:02.013436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:07:23.892 Test: blockdev nvme passthru rw ...0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.892 passed 00:07:23.892 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.892 Test: blockdev nvme admin passthru ...[2024-11-17 13:56:02.014071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.892 [2024-11-17 13:56:02.014107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.892 passed 00:07:23.892 Test: blockdev copy ...passed 00:07:23.892 Suite: bdevio tests on: Nvme2n3 00:07:23.892 Test: blockdev write read block ...passed 00:07:23.892 Test: blockdev write zeroes read block ...passed 00:07:23.892 Test: blockdev write zeroes read no split ...passed 00:07:23.892 Test: blockdev write zeroes read split ...passed 00:07:23.892 Test: blockdev write zeroes read split partial ...passed 00:07:23.892 Test: blockdev reset ...[2024-11-17 13:56:02.028531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.892 [2024-11-17 13:56:02.030395] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.892 passed 00:07:23.892 Test: blockdev write read 8 blocks ...passed 00:07:23.892 Test: blockdev write read size > 128k ...passed 00:07:23.892 Test: blockdev write read invalid size ...passed 00:07:23.892 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.892 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.892 Test: blockdev write read max offset ...passed 00:07:23.892 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.892 Test: blockdev writev readv 8 blocks ...passed 00:07:23.892 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.892 Test: blockdev writev readv block ...passed 00:07:23.892 Test: blockdev writev readv size > 128k ...passed 00:07:23.892 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.892 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.034896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a500a000 len:0x1000 00:07:23.892 passed 00:07:23.892 Test: blockdev nvme passthru rw ...[2024-11-17 13:56:02.034935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.892 passed 00:07:23.892 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:56:02.035332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.892 [2024-11-17 13:56:02.035355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.892 passed 00:07:23.892 Test: blockdev nvme admin passthru ...passed 00:07:23.892 Test: blockdev copy ...passed 00:07:23.892 Suite: bdevio tests on: Nvme2n2 00:07:23.892 Test: blockdev write read block ...passed 00:07:23.892 Test: blockdev write zeroes read block ...passed 00:07:23.892 Test: blockdev write zeroes read no split ...passed 00:07:23.892 Test: blockdev write zeroes read split ...passed 00:07:23.892 Test: blockdev write zeroes read split partial ...passed 00:07:23.893 Test: blockdev reset ...[2024-11-17 13:56:02.050530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.893 [2024-11-17 13:56:02.052152] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.893 passed 00:07:23.893 Test: blockdev write read 8 blocks ...passed 00:07:23.893 Test: blockdev write read size > 128k ...passed 00:07:23.893 Test: blockdev write read invalid size ...passed 00:07:23.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.893 Test: blockdev write read max offset ...passed 00:07:23.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.893 Test: blockdev writev readv 8 blocks ...passed 00:07:23.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.893 Test: blockdev writev readv block ...passed 00:07:23.893 Test: blockdev writev readv size > 128k ...passed 00:07:23.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.893 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.056593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d1e05000 len:0x1000 00:07:23.893 [2024-11-17 13:56:02.056630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme passthru rw ...passed 00:07:23.893 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:56:02.057640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme admin passthru ...[2024-11-17 13:56:02.057760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev copy ...passed 00:07:23.893 Suite: bdevio tests on: Nvme2n1 00:07:23.893 Test: blockdev write read block ...passed 00:07:23.893 Test: blockdev write zeroes read block ...passed 00:07:23.893 Test: blockdev write zeroes read no split ...passed 00:07:23.893 Test: blockdev write zeroes read split ...passed 00:07:23.893 Test: blockdev write zeroes read split partial ...passed 00:07:23.893 Test: blockdev reset ...[2024-11-17 13:56:02.072657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:23.893 [2024-11-17 13:56:02.074673] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.893 passed 00:07:23.893 Test: blockdev write read 8 blocks ...passed 00:07:23.893 Test: blockdev write read size > 128k ...passed 00:07:23.893 Test: blockdev write read invalid size ...passed 00:07:23.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.893 Test: blockdev write read max offset ...passed 00:07:23.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.893 Test: blockdev writev readv 8 blocks ...passed 00:07:23.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.893 Test: blockdev writev readv block ...passed 00:07:23.893 Test: blockdev writev readv size > 128k ...passed 00:07:23.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.893 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c3c02000 len:0x1000 00:07:23.893 [2024-11-17 13:56:02.079419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme passthru rw ...passed 00:07:23.893 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:56:02.080040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme admin passthru ...[2024-11-17 13:56:02.080066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev copy ...passed 00:07:23.893 Suite: bdevio tests on: Nvme1n1p2 00:07:23.893 Test: blockdev write read block ...passed 00:07:23.893 Test: blockdev write zeroes read block ...passed 00:07:23.893 Test: blockdev write zeroes read no split ...passed 00:07:23.893 Test: blockdev write zeroes read split ...passed 00:07:23.893 Test: blockdev write zeroes read split partial ...passed 00:07:23.893 Test: blockdev reset ...[2024-11-17 13:56:02.094605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:23.893 [2024-11-17 13:56:02.096042] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.893 passed 00:07:23.893 Test: blockdev write read 8 blocks ...passed 00:07:23.893 Test: blockdev write read size > 128k ...passed 00:07:23.893 Test: blockdev write read invalid size ...passed 00:07:23.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.893 Test: blockdev write read max offset ...passed 00:07:23.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.893 Test: blockdev writev readv 8 blocks ...passed 00:07:23.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.893 Test: blockdev writev readv block ...passed 00:07:23.893 Test: blockdev writev readv size > 128k ...passed 00:07:23.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.893 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.100790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d503b000 len:0x1000 00:07:23.893 [2024-11-17 13:56:02.100827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme passthru rw ...passed 00:07:23.893 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.893 Test: blockdev nvme admin passthru ...passed 00:07:23.893 Test: blockdev copy ...passed 00:07:23.893 Suite: bdevio tests on: Nvme1n1p1 00:07:23.893 Test: blockdev write read block ...passed 00:07:23.893 Test: blockdev write zeroes read block ...passed 00:07:23.893 Test: blockdev write zeroes read no split ...passed 00:07:23.893 Test: blockdev write zeroes read split ...passed 00:07:23.893 Test: blockdev write zeroes read split partial ...passed 00:07:23.893 Test: blockdev reset ...[2024-11-17 13:56:02.111596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:23.893 [2024-11-17 13:56:02.112971] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:23.893 passed 00:07:23.893 Test: blockdev write read 8 blocks ...passed 00:07:23.893 Test: blockdev write read size > 128k ...passed 00:07:23.893 Test: blockdev write read invalid size ...passed 00:07:23.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.893 Test: blockdev write read max offset ...passed 00:07:23.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.893 Test: blockdev writev readv 8 blocks ...passed 00:07:23.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.893 Test: blockdev writev readv block ...passed 00:07:23.893 Test: blockdev writev readv size > 128k ...passed 00:07:23.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.893 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.117446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d5037000 len:0x1000 00:07:23.893 [2024-11-17 13:56:02.117482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme passthru rw ...passed 00:07:23.893 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.893 Test: blockdev nvme admin passthru ...passed 00:07:23.893 Test: blockdev copy ...passed 00:07:23.893 Suite: bdevio tests on: Nvme0n1 00:07:23.893 Test: blockdev write read block ...passed 00:07:23.893 Test: blockdev write zeroes read block ...passed 00:07:23.893 Test: blockdev write zeroes read no split ...passed 00:07:23.893 Test: blockdev write zeroes read split ...passed 00:07:23.893 Test: blockdev write zeroes read split partial ...passed 00:07:23.893 Test: blockdev reset ...[2024-11-17 13:56:02.129009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:23.893 [2024-11-17 13:56:02.130372] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:23.893 passed 00:07:23.893 Test: blockdev write read 8 blocks ...passed 00:07:23.893 Test: blockdev write read size > 128k ...passed 00:07:23.893 Test: blockdev write read invalid size ...passed 00:07:23.893 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.893 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.893 Test: blockdev write read max offset ...passed 00:07:23.893 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.893 Test: blockdev writev readv 8 blocks ...passed 00:07:23.893 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.893 Test: blockdev writev readv block ...passed 00:07:23.893 Test: blockdev writev readv size > 128k ...passed 00:07:23.893 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.893 Test: blockdev comparev and writev ...[2024-11-17 13:56:02.133962] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:23.893 separate metadata which is not supported yet. 
00:07:23.893 passed 00:07:23.893 Test: blockdev nvme passthru rw ...passed 00:07:23.893 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:56:02.134325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:23.893 [2024-11-17 13:56:02.134366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:23.893 passed 00:07:23.893 Test: blockdev nvme admin passthru ...passed 00:07:23.893 Test: blockdev copy ...passed 00:07:23.893 00:07:23.893 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.893 suites 7 7 n/a 0 0 00:07:23.893 tests 161 161 161 0 0 00:07:23.893 asserts 1025 1025 1025 0 n/a 00:07:23.893 00:07:23.893 Elapsed time = 0.349 seconds 00:07:23.894 0 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73670 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73670 ']' 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73670 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73670 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.894 killing process with pid 73670 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73670' 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73670 00:07:23.894 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73670 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:24.152 00:07:24.152 real 0m1.316s 00:07:24.152 user 0m3.323s 00:07:24.152 sys 0m0.254s 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.152 ************************************ 00:07:24.152 END TEST bdev_bounds 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 ************************************ 00:07:24.152 13:56:02 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:24.152 13:56:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:24.152 13:56:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.152 13:56:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 ************************************ 00:07:24.152 START TEST bdev_nbd 00:07:24.152 ************************************ 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:24.152 13:56:02 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73723 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73723 /var/tmp/spdk-nbd.sock 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73723 ']' 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:24.152 13:56:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:24.152 [2024-11-17 13:56:02.442405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:24.152 [2024-11-17 13:56:02.442518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.410 [2024-11-17 13:56:02.590995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.410 [2024-11-17 13:56:02.623583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:24.975 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.234 1+0 records in 00:07:25.234 1+0 records out 00:07:25.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241936 s, 16.9 MB/s 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.234 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.493 1+0 records in 00:07:25.493 1+0 records out 00:07:25.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466594 s, 8.8 MB/s 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.493 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.754 1+0 records in 00:07:25.754 1+0 records out 00:07:25.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621914 s, 6.6 MB/s 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:25.754 13:56:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.025 1+0 records in 00:07:26.025 1+0 records out 00:07:26.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959613 s, 4.3 MB/s 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.025 1+0 records in 00:07:26.025 1+0 records out 00:07:26.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061559 s, 6.7 MB/s 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:26.025 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
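Each nbd_start_disk above is followed by the same readiness probe, whose xtrace is interleaved through this stretch of the log: poll /proc/partitions until the kernel publishes the device, then copy one 4 KiB block with O_DIRECT and check the copy is non-empty. Distilled into a standalone helper (the retry count and the dd/stat/rm sequence match the trace; the sleep between retries is an assumption):

  waitfornbd() {
      local nbd_name=$1 i size
      # Wait for the kernel to expose the device node.
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed backoff; the trace only shows the retry loop
      done
      # Prove the device is readable: one direct-I/O block, then size check.
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }
  waitfornbd nbd5   # e.g. right after "nbd_start_disk Nvme2n3" above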
00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.295 1+0 records in 00:07:26.295 1+0 records out 00:07:26.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000959619 s, 4.3 MB/s 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:26.295 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.554 1+0 records in 00:07:26.554 1+0 records out 00:07:26.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474003 s, 8.6 MB/s 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:26.554 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.813 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd0", 00:07:26.813 "bdev_name": "Nvme0n1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd1", 00:07:26.813 "bdev_name": "Nvme1n1p1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd2", 00:07:26.813 "bdev_name": "Nvme1n1p2" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd3", 00:07:26.813 "bdev_name": "Nvme2n1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd4", 00:07:26.813 "bdev_name": "Nvme2n2" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd5", 00:07:26.813 "bdev_name": "Nvme2n3" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd6", 00:07:26.813 "bdev_name": "Nvme3n1" 00:07:26.813 } 00:07:26.813 ]' 00:07:26.813 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:26.813 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd0", 00:07:26.813 "bdev_name": "Nvme0n1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd1", 00:07:26.813 "bdev_name": "Nvme1n1p1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd2", 00:07:26.813 "bdev_name": "Nvme1n1p2" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd3", 00:07:26.813 "bdev_name": "Nvme2n1" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd4", 00:07:26.813 "bdev_name": "Nvme2n2" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd5", 00:07:26.813 "bdev_name": "Nvme2n3" 00:07:26.813 }, 00:07:26.813 { 00:07:26.813 "nbd_device": "/dev/nbd6", 00:07:26.813 "bdev_name": "Nvme3n1" 00:07:26.813 } 00:07:26.813 ]' 00:07:26.814 13:56:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.814 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.072 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.331 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.589 13:56:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:27.846 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:27.846 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:27.846 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:27.846 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.847 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.104 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
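The teardown running through here stops every exported device over the app's RPC socket, waits for each node to vanish from /proc/partitions, and then confirms the app reports no exports left; the nbd_get_disks, jq, and grep -c records just below implement that count check. A condensed sketch, with the device list and socket path taken from the trace:

  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
      # Ask the app to tear down the export (the per-device wait loop from the
      # trace, the mirror image of the waitfornbd probe, is omitted here).
      sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_stop_disk "$dev"
  done
  # nbd_get_disks returns [] once all exports are gone, so the count is 0.
  # (grep -c exits nonzero on zero matches; the captured "0" is still valid.)
  count=$(sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  [ "$count" -eq 0 ]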
00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.363 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:28.620 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:28.621 13:56:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.621 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:28.879 /dev/nbd0 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:28.879 1+0 records in 00:07:28.879 1+0 records out 00:07:28.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433391 s, 9.5 MB/s 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:28.879 13:56:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:28.879 /dev/nbd1 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:28.879 13:56:07 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:28.879 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:29.137 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.137 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.137 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.137 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.137 1+0 records in 00:07:29.138 1+0 records out 00:07:29.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038881 s, 10.5 MB/s 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:29.138 /dev/nbd10 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.138 1+0 records in 00:07:29.138 1+0 records out 00:07:29.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284015 s, 14.4 MB/s 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.138 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:29.395 /dev/nbd11 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:29.395 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.396 1+0 records in 00:07:29.396 1+0 records out 00:07:29.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545137 s, 7.5 MB/s 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.396 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:29.653 /dev/nbd12 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
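The waitfornbd helper being traced here has a simple contract: poll /proc/partitions until the kernel has registered the new nbd device, then prove the device actually serves data with a single 4 KiB O_DIRECT read. A minimal bash sketch of that pattern, reconstructed from the xtrace above (the 20-try limit, dd invocation, and size check match the trace; the sleep between polls is an assumption, it does not appear in the xtrace):

    waitfornbd() {
        local nbd_name=$1 i
        local tmpfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # poll until the device shows up in /proc/partitions (at most 20 tries)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed poll interval, not visible in the trace
        done
        # one direct-I/O 4 KiB read confirms the nbd device is serving data
        dd if=/dev/"$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$tmpfile")
        rm -f "$tmpfile"
        [ "$size" != 0 ]
    }

The related waitfornbd_exit helper, traced further down, applies the same /proc/partitions polling loop when the devices are torn down again.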
00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.654 1+0 records in 00:07:29.654 1+0 records out 00:07:29.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328506 s, 12.5 MB/s 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.654 13:56:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:29.912 /dev/nbd13 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.912 1+0 records in 00:07:29.912 1+0 records out 00:07:29.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576188 s, 7.1 MB/s 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:29.912 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:29.912 /dev/nbd14 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.171 1+0 records in 00:07:30.171 1+0 records out 00:07:30.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659376 s, 6.2 MB/s 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd0", 00:07:30.171 "bdev_name": "Nvme0n1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd1", 00:07:30.171 "bdev_name": "Nvme1n1p1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd10", 00:07:30.171 "bdev_name": "Nvme1n1p2" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd11", 00:07:30.171 "bdev_name": "Nvme2n1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd12", 00:07:30.171 "bdev_name": "Nvme2n2" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd13", 00:07:30.171 "bdev_name": "Nvme2n3" 
00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd14", 00:07:30.171 "bdev_name": "Nvme3n1" 00:07:30.171 } 00:07:30.171 ]' 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd0", 00:07:30.171 "bdev_name": "Nvme0n1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd1", 00:07:30.171 "bdev_name": "Nvme1n1p1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd10", 00:07:30.171 "bdev_name": "Nvme1n1p2" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd11", 00:07:30.171 "bdev_name": "Nvme2n1" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd12", 00:07:30.171 "bdev_name": "Nvme2n2" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd13", 00:07:30.171 "bdev_name": "Nvme2n3" 00:07:30.171 }, 00:07:30.171 { 00:07:30.171 "nbd_device": "/dev/nbd14", 00:07:30.171 "bdev_name": "Nvme3n1" 00:07:30.171 } 00:07:30.171 ]' 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.171 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:30.171 /dev/nbd1 00:07:30.171 /dev/nbd10 00:07:30.171 /dev/nbd11 00:07:30.171 /dev/nbd12 00:07:30.172 /dev/nbd13 00:07:30.172 /dev/nbd14' 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:30.172 /dev/nbd1 00:07:30.172 /dev/nbd10 00:07:30.172 /dev/nbd11 00:07:30.172 /dev/nbd12 00:07:30.172 /dev/nbd13 00:07:30.172 /dev/nbd14' 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:30.172 256+0 records in 00:07:30.172 256+0 records out 00:07:30.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00615763 s, 170 MB/s 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.172 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:30.430 256+0 records in 00:07:30.430 256+0 records out 00:07:30.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0774875 s, 13.5 MB/s 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:30.430 256+0 records in 00:07:30.430 256+0 records out 00:07:30.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.075784 s, 13.8 MB/s 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:30.430 256+0 records in 00:07:30.430 256+0 records out 00:07:30.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0772459 s, 13.6 MB/s 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.430 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:30.688 256+0 records in 00:07:30.688 256+0 records out 00:07:30.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0817775 s, 12.8 MB/s 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:30.688 256+0 records in 00:07:30.688 256+0 records out 00:07:30.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0821876 s, 12.8 MB/s 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:30.688 256+0 records in 00:07:30.688 256+0 records out 00:07:30.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0739929 s, 14.2 MB/s 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:30.688 13:56:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:30.947 256+0 records in 00:07:30.947 256+0 records out 00:07:30.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0744502 s, 14.1 MB/s 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.947 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.205 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.463 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.721 13:56:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.979 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:32.238 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:32.238 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:32.238 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.239 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.511 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:32.512 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:32.770 malloc_lvol_verify 00:07:32.770 13:56:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:32.770 aa82a364-61fb-4453-b94f-54c84a99b5c0 00:07:32.770 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:33.029 3790edbd-88e3-433c-8bdd-7bee0e46d64c 00:07:33.029 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:33.287 /dev/nbd0 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:33.287 mke2fs 1.47.0 (5-Feb-2023) 00:07:33.287 Discarding device blocks: 0/4096 done 00:07:33.287 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:33.287 00:07:33.287 Allocating group tables: 0/1 done 00:07:33.287 Writing inode tables: 0/1 done 00:07:33.287 Creating journal (1024 blocks): done 00:07:33.287 Writing superblocks and filesystem accounting information: 0/1 done 00:07:33.287 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:33.287 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73723 ']' 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.546 killing process with pid 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73723' 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73723 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:33.546 00:07:33.546 real 0m9.427s 00:07:33.546 user 0m13.606s 00:07:33.546 sys 0m3.273s 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.546 ************************************ 00:07:33.546 END TEST bdev_nbd 00:07:33.546 ************************************ 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:33.546 skipping fio tests on NVMe due to multi-ns failures. 00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:33.546 13:56:11 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:33.546 13:56:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:33.546 13:56:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.546 13:56:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.546 ************************************ 00:07:33.546 START TEST bdev_verify 00:07:33.546 ************************************ 00:07:33.546 13:56:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:33.805 [2024-11-17 13:56:11.899366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:33.805 [2024-11-17 13:56:11.899469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74118 ] 00:07:33.805 [2024-11-17 13:56:12.045341] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:33.805 [2024-11-17 13:56:12.074110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.805 [2024-11-17 13:56:12.074210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.372 Running I/O for 5 seconds... 
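bdev_verify drives a verify workload with bdevperf straight against the bdev layer, using the bdev definitions in bdev.json rather than any kernel block device. The invocation just traced, re-broken with the flag meanings spelled out (descriptions per standard bdevperf usage; -C is carried over from the logged command without annotation):

    # -q 128    queue depth per job
    # -o 4096   I/O size in bytes
    # -w verify write, read back, and compare each block
    # -t 5      run time in seconds
    # -m 0x3    core mask, which is why the log shows reactors on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3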
00:07:36.681 19840.00 IOPS, 77.50 MiB/s [2024-11-17T13:56:15.940Z] 19296.00 IOPS, 75.38 MiB/s [2024-11-17T13:56:16.890Z] 20906.67 IOPS, 81.67 MiB/s [2024-11-17T13:56:17.823Z] 20608.00 IOPS, 80.50 MiB/s [2024-11-17T13:56:17.823Z] 20480.00 IOPS, 80.00 MiB/s
00:07:39.522 Latency(us)
00:07:39.522 [2024-11-17T13:56:17.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0xbd0bd
00:07:39.522 Nvme0n1 : 5.09 1458.50 5.70 0.00 0.00 87544.39 12855.14 90338.86
00:07:39.522 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:39.522 Nvme0n1 : 5.06 1417.55 5.54 0.00 0.00 89851.48 15526.99 88322.36
00:07:39.522 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x4ff80
00:07:39.522 Nvme1n1p1 : 5.09 1458.04 5.70 0.00 0.00 87384.31 15426.17 83079.48
00:07:39.522 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:39.522 Nvme1n1p1 : 5.08 1422.32 5.56 0.00 0.00 89390.52 11241.94 77030.01
00:07:39.522 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x4ff7f
00:07:39.522 Nvme1n1p2 : 5.09 1457.61 5.69 0.00 0.00 87218.32 17442.66 84692.68
00:07:39.522 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:39.522 Nvme1n1p2 : 5.09 1421.92 5.55 0.00 0.00 89269.13 10737.82 74610.22
00:07:39.522 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x80000
00:07:39.522 Nvme2n1 : 5.09 1457.22 5.69 0.00 0.00 87034.65 16938.54 85902.57
00:07:39.522 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x80000 length 0x80000
00:07:39.522 Nvme2n1 : 5.09 1421.53 5.55 0.00 0.00 89105.41 10384.94 70980.53
00:07:39.522 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x80000
00:07:39.522 Nvme2n2 : 5.10 1456.81 5.69 0.00 0.00 86851.61 16232.76 81466.29
00:07:39.522 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x80000 length 0x80000
00:07:39.522 Nvme2n2 : 5.09 1421.13 5.55 0.00 0.00 88933.81 10132.87 72593.72
00:07:39.522 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x80000
00:07:39.522 Nvme2n3 : 5.10 1456.40 5.69 0.00 0.00 86675.14 14720.39 81869.59
00:07:39.522 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x80000 length 0x80000
00:07:39.522 Nvme2n3 : 5.10 1431.22 5.59 0.00 0.00 88314.82 6503.19 76223.41
00:07:39.522 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x0 length 0x20000
00:07:39.522 Nvme3n1 : 5.10 1455.37 5.69 0.00 0.00 86514.52 9679.16 84692.68
00:07:39.522 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:39.522 Verification LBA range: start 0x20000 length 0x20000
00:07:39.522 Nvme3n1 : 5.10 1430.24 5.59 0.00 0.00 88148.64 8822.15 79449.80
[2024-11-17T13:56:17.823Z] ===================================================================================================================
[2024-11-17T13:56:17.823Z] Total : 20165.87 78.77 0.00 0.00 88003.14 6503.19 90338.86
00:07:40.088
00:07:40.088 real 0m6.396s
00:07:40.088 user 0m12.108s
00:07:40.088 sys 0m0.178s
00:07:40.088 13:56:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.088 ************************************
00:07:40.088 END TEST bdev_verify
00:07:40.088 ************************************
00:07:40.088 13:56:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:40.088 13:56:18 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.088 13:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:40.088 13:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.088 13:56:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:40.088 ************************************
00:07:40.088 START TEST bdev_verify_big_io
00:07:40.088 ************************************
00:07:40.088 13:56:18 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:40.345 [2024-11-17 13:56:18.369965] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:40.345 [2024-11-17 13:56:18.370075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74205 ]
00:07:40.345 [2024-11-17 13:56:18.516639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:40.345 [2024-11-17 13:56:18.550203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.345 [2024-11-17 13:56:18.550280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:40.911 Running I/O for 5 seconds...
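The MiB/s column in these tables follows directly from the IOPS column and the configured I/O size: MiB/s = IOPS x I/O size in bytes / 2^20. For the big-I/O run that follows (-o 65536), the very first progress sample checks out:

    $ echo '16.00 * 65536 / 1048576' | bc -l    # 16 IOPS of 64 KiB each
    1.00000000000000000000                      # matches '16.00 IOPS, 1.00 MiB/s'

The same formula with the 4096-byte I/O size reproduces the verify table above, e.g. 19840.00 x 4096 / 2^20 = 77.50 MiB/s.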
00:07:45.333 16.00 IOPS, 1.00 MiB/s [2024-11-17T13:56:25.535Z] 1346.50 IOPS, 84.16 MiB/s [2024-11-17T13:56:25.535Z] 2156.00 IOPS, 134.75 MiB/s
00:07:47.234 Latency(us)
00:07:47.234 [2024-11-17T13:56:25.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:47.234 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0xbd0b
00:07:47.234 Nvme0n1 : 5.89 92.22 5.76 0.00 0.00 1322633.83 20366.57 1387346.71
00:07:47.234 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:47.234 Nvme0n1 : 5.80 105.65 6.60 0.00 0.00 1142690.10 32465.53 1380893.93
00:07:47.234 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x4ff8
00:07:47.234 Nvme1n1p1 : 5.99 96.13 6.01 0.00 0.00 1242249.93 93565.24 1200216.22
00:07:47.234 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:47.234 Nvme1n1p1 : 5.80 110.26 6.89 0.00 0.00 1081271.53 110503.78 1193763.45
00:07:47.234 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x4ff7
00:07:47.234 Nvme1n1p2 : 5.99 96.09 6.01 0.00 0.00 1197932.96 102034.51 1051802.39
00:07:47.234 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:47.234 Nvme1n1p2 : 5.93 112.64 7.04 0.00 0.00 1020655.43 123409.33 1013085.74
00:07:47.234 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x8000
00:07:47.234 Nvme2n1 : 6.11 100.07 6.25 0.00 0.00 1114199.79 57671.68 1077613.49
00:07:47.234 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x8000 length 0x8000
00:07:47.234 Nvme2n1 : 6.05 116.36 7.27 0.00 0.00 956389.08 110503.78 935652.43
00:07:47.234 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x8000
00:07:47.234 Nvme2n2 : 6.11 104.79 6.55 0.00 0.00 1036798.74 50613.96 1103424.59
00:07:47.234 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x8000 length 0x8000
00:07:47.234 Nvme2n2 : 6.18 120.32 7.52 0.00 0.00 894007.80 86305.87 955010.76
00:07:47.234 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x8000
00:07:47.234 Nvme2n3 : 6.15 108.20 6.76 0.00 0.00 966932.77 35893.56 1135688.47
00:07:47.234 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x8000 length 0x8000
00:07:47.234 Nvme2n3 : 6.22 83.58 5.22 0.00 0.00 1256956.05 31860.58 2245565.83
00:07:47.234 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x0 length 0x2000
00:07:47.234 Nvme3n1 : 6.23 127.48 7.97 0.00 0.00 795924.11 715.22 1161499.57
00:07:47.234 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:47.234 Verification LBA range: start 0x2000 length 0x2000
00:07:47.234 Nvme3n1 : 6.23 91.88 5.74 0.00 0.00 1104199.11 2470.20 2271376.94
[2024-11-17T13:56:25.535Z] ===================================================================================================================
00:07:47.234 [2024-11-17T13:56:25.535Z] Total : 1465.66 91.60 0.00 0.00 1064851.82 715.22 2271376.94
00:07:49.137
00:07:49.137 real 0m8.769s
00:07:49.137 user 0m16.763s
00:07:49.137 sys 0m0.244s
00:07:49.137 ************************************
00:07:49.137 END TEST bdev_verify_big_io
00:07:49.137 ************************************
00:07:49.137 13:56:27 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:49.137 13:56:27 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:49.137 13:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:49.137 13:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:49.137 13:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:49.137 13:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:49.137 ************************************
00:07:49.137 START TEST bdev_write_zeroes
00:07:49.137 ************************************
00:07:49.137 13:56:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:49.137 [2024-11-17 13:56:27.206860] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:49.137 [2024-11-17 13:56:27.206977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74309 ]
00:07:49.137 [2024-11-17 13:56:27.354289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.137 [2024-11-17 13:56:27.388339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.702 Running I/O for 1 seconds...
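Every test in this file goes through the run_test wrapper from common/autotest_common.sh, which produces the asterisk-framed START TEST / END TEST banners and the real/user/sys timing block seen around each run. A simplified sketch of what that wrapper does (the real helper also juggles xtrace state, which is where the @1126/@1107 xtrace_disable lines in the log come from):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                # source of the real/user/sys block in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }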
00:07:50.637 58210.00 IOPS, 227.38 MiB/s
00:07:50.637 Latency(us)
00:07:50.637 [2024-11-17T13:56:28.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:50.637 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme0n1 : 1.02 8290.65 32.39 0.00 0.00 15404.64 5999.06 33675.42
00:07:50.637 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme1n1p1 : 1.02 8309.73 32.46 0.00 0.00 15348.00 11897.30 26617.70
00:07:50.637 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme1n1p2 : 1.03 8299.60 32.42 0.00 0.00 15314.53 11292.36 25811.10
00:07:50.637 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme2n1 : 1.03 8290.28 32.38 0.00 0.00 15304.38 11695.66 24500.38
00:07:50.637 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme2n2 : 1.03 8280.93 32.35 0.00 0.00 15301.10 11645.24 24903.68
00:07:50.637 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme2n3 : 1.03 8271.67 32.31 0.00 0.00 15292.01 11645.24 25105.33
00:07:50.637 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.637 Nvme3n1 : 1.03 8262.41 32.28 0.00 0.00 15249.56 10334.52 26819.35
00:07:50.637 [2024-11-17T13:56:28.938Z] ===================================================================================================================
00:07:50.637 [2024-11-17T13:56:28.938Z] Total : 58005.26 226.58 0.00 0.00 15316.27 5999.06 33675.42
00:07:50.895
00:07:50.895 real 0m1.851s
00:07:50.895 user 0m1.565s
00:07:50.895 sys 0m0.173s
00:07:50.895 13:56:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:50.895 ************************************
00:07:50.895 END TEST bdev_write_zeroes
00:07:50.895 ************************************
00:07:50.895 13:56:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:50.895 13:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.895 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:50.895 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.895 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:50.895 ************************************
00:07:50.895 START TEST bdev_json_nonenclosed
00:07:50.895 ************************************
00:07:50.895 13:56:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.895 [2024-11-17 13:56:29.110028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
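bdev_json_nonenclosed is a negative test: it points bdevperf at a configuration whose top level is not a JSON object and expects startup to abort with the 'not enclosed in {}' error shown below. The log does not print nonenclosed.json itself, but a config with that defect would look something like a top-level array (valid JSON, yet not an object):

    [
      { "subsystems": [] }
    ]

json_config_prepare_ctx rejects this before any subsystem is loaded, so bdevperf exits without running I/O, which the short real/user/sys times below reflect.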
00:07:50.895 [2024-11-17 13:56:29.110133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74351 ] 00:07:51.152 [2024-11-17 13:56:29.258328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.152 [2024-11-17 13:56:29.290614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.152 [2024-11-17 13:56:29.290717] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:51.152 [2024-11-17 13:56:29.290741] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:51.152 [2024-11-17 13:56:29.290756] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.152 00:07:51.152 real 0m0.318s 00:07:51.152 user 0m0.121s 00:07:51.152 sys 0m0.094s 00:07:51.152 13:56:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.152 ************************************ 00:07:51.152 END TEST bdev_json_nonenclosed 00:07:51.152 ************************************ 00:07:51.152 13:56:29 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:51.152 13:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:51.152 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:51.152 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.152 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.152 ************************************ 00:07:51.152 START TEST bdev_json_nonarray 00:07:51.152 ************************************ 00:07:51.152 13:56:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:51.410 [2024-11-17 13:56:29.491033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.410 [2024-11-17 13:56:29.491140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74371 ] 00:07:51.410 [2024-11-17 13:56:29.636148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.410 [2024-11-17 13:56:29.668974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.411 [2024-11-17 13:56:29.669063] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
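bdev_json_nonarray is the sibling negative test: the config is a proper object, but its "subsystems" key is not an array, tripping the 'should be an array' validation error quoted just above. Again the file itself is not echoed in the log; an illustrative shape that would fail this way:

    {
      "subsystems": { "subsystem": "bdev" }
    }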
00:07:51.411 [2024-11-17 13:56:29.669079] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:51.411 [2024-11-17 13:56:29.669090] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.670 00:07:51.670 real 0m0.318s 00:07:51.670 user 0m0.108s 00:07:51.670 sys 0m0.106s 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.670 ************************************ 00:07:51.670 END TEST bdev_json_nonarray 00:07:51.670 ************************************ 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:51.670 13:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:51.670 13:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:51.670 13:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:51.670 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.670 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.670 13:56:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.670 ************************************ 00:07:51.670 START TEST bdev_gpt_uuid 00:07:51.670 ************************************ 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74391 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74391 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74391 ']' 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.670 13:56:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.670 [2024-11-17 13:56:29.876301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:51.670 [2024-11-17 13:56:29.876416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74391 ] 00:07:51.931 [2024-11-17 13:56:30.023463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.931 [2024-11-17 13:56:30.057767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.502 13:56:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.502 13:56:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:52.502 13:56:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.502 13:56:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.502 13:56:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 Some configs were skipped because the RPC state that can call them passed over. 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:53.073 { 00:07:53.073 "name": "Nvme1n1p1", 00:07:53.073 "aliases": [ 00:07:53.073 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:53.073 ], 00:07:53.073 "product_name": "GPT Disk", 00:07:53.073 "block_size": 4096, 00:07:53.073 "num_blocks": 655104, 00:07:53.073 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:53.073 "assigned_rate_limits": { 00:07:53.073 "rw_ios_per_sec": 0, 00:07:53.073 "rw_mbytes_per_sec": 0, 00:07:53.073 "r_mbytes_per_sec": 0, 00:07:53.073 "w_mbytes_per_sec": 0 00:07:53.073 }, 00:07:53.073 "claimed": false, 00:07:53.073 "zoned": false, 00:07:53.073 "supported_io_types": { 00:07:53.073 "read": true, 00:07:53.073 "write": true, 00:07:53.073 "unmap": true, 00:07:53.073 "flush": true, 00:07:53.073 "reset": true, 00:07:53.073 "nvme_admin": false, 00:07:53.073 "nvme_io": false, 00:07:53.073 "nvme_io_md": false, 00:07:53.073 "write_zeroes": true, 00:07:53.073 "zcopy": false, 00:07:53.073 "get_zone_info": false, 00:07:53.073 "zone_management": false, 00:07:53.073 "zone_append": false, 00:07:53.073 "compare": true, 00:07:53.073 "compare_and_write": false, 00:07:53.073 "abort": true, 00:07:53.073 "seek_hole": false, 00:07:53.073 "seek_data": false, 00:07:53.073 "copy": true, 00:07:53.073 "nvme_iov_md": false 00:07:53.073 }, 00:07:53.073 "driver_specific": { 
00:07:53.073 "gpt": { 00:07:53.073 "base_bdev": "Nvme1n1", 00:07:53.073 "offset_blocks": 256, 00:07:53.073 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:53.073 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:53.073 "partition_name": "SPDK_TEST_first" 00:07:53.073 } 00:07:53.073 } 00:07:53.073 } 00:07:53.073 ]' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:53.073 { 00:07:53.073 "name": "Nvme1n1p2", 00:07:53.073 "aliases": [ 00:07:53.073 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:53.073 ], 00:07:53.073 "product_name": "GPT Disk", 00:07:53.073 "block_size": 4096, 00:07:53.073 "num_blocks": 655103, 00:07:53.073 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:53.073 "assigned_rate_limits": { 00:07:53.073 "rw_ios_per_sec": 0, 00:07:53.073 "rw_mbytes_per_sec": 0, 00:07:53.073 "r_mbytes_per_sec": 0, 00:07:53.073 "w_mbytes_per_sec": 0 00:07:53.073 }, 00:07:53.073 "claimed": false, 00:07:53.073 "zoned": false, 00:07:53.073 "supported_io_types": { 00:07:53.073 "read": true, 00:07:53.073 "write": true, 00:07:53.073 "unmap": true, 00:07:53.073 "flush": true, 00:07:53.073 "reset": true, 00:07:53.073 "nvme_admin": false, 00:07:53.073 "nvme_io": false, 00:07:53.073 "nvme_io_md": false, 00:07:53.073 "write_zeroes": true, 00:07:53.073 "zcopy": false, 00:07:53.073 "get_zone_info": false, 00:07:53.073 "zone_management": false, 00:07:53.073 "zone_append": false, 00:07:53.073 "compare": true, 00:07:53.073 "compare_and_write": false, 00:07:53.073 "abort": true, 00:07:53.073 "seek_hole": false, 00:07:53.073 "seek_data": false, 00:07:53.073 "copy": true, 00:07:53.073 "nvme_iov_md": false 00:07:53.073 }, 00:07:53.073 "driver_specific": { 00:07:53.073 "gpt": { 00:07:53.073 "base_bdev": "Nvme1n1", 00:07:53.073 "offset_blocks": 655360, 00:07:53.073 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:53.073 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:53.073 "partition_name": "SPDK_TEST_second" 00:07:53.073 } 00:07:53.073 } 00:07:53.073 } 00:07:53.073 ]' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:53.073 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74391 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74391 ']' 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74391 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74391 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.074 killing process with pid 74391 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74391' 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74391 00:07:53.074 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74391 00:07:53.641 00:07:53.641 real 0m1.849s 00:07:53.641 user 0m2.082s 00:07:53.641 sys 0m0.328s 00:07:53.641 ************************************ 00:07:53.641 END TEST bdev_gpt_uuid 00:07:53.641 ************************************ 00:07:53.641 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.641 13:56:31 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:53.641 13:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:53.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:53.899 Waiting for block devices as requested 00:07:54.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.157 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:54.157 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.416 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:59.769 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:59.769 13:56:37 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:59.769 13:56:37 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:59.769 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:59.769 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:59.769 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:59.769 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:59.769 13:56:37 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:59.769 00:07:59.769 real 0m48.185s 00:07:59.769 user 1m1.707s 00:07:59.769 sys 0m7.238s 00:07:59.769 13:56:37 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.769 13:56:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:59.769 ************************************ 00:07:59.769 END TEST blockdev_nvme_gpt 00:07:59.769 ************************************ 00:07:59.769 13:56:37 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:59.769 13:56:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.769 13:56:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.769 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:07:59.769 ************************************ 00:07:59.769 START TEST nvme 00:07:59.769 ************************************ 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:59.769 * Looking for test storage... 00:07:59.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.769 13:56:37 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.769 13:56:37 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.769 13:56:37 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.769 13:56:37 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.769 13:56:37 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.769 13:56:37 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:59.769 13:56:37 nvme -- scripts/common.sh@345 -- # : 1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.769 13:56:37 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:59.769 13:56:37 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@353 -- # local d=1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.769 13:56:37 nvme -- scripts/common.sh@355 -- # echo 1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.769 13:56:37 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@353 -- # local d=2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.769 13:56:37 nvme -- scripts/common.sh@355 -- # echo 2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.769 13:56:37 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.769 13:56:37 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.769 13:56:37 nvme -- scripts/common.sh@368 -- # return 0 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.769 --rc genhtml_branch_coverage=1 00:07:59.769 --rc genhtml_function_coverage=1 00:07:59.769 --rc genhtml_legend=1 00:07:59.769 --rc geninfo_all_blocks=1 00:07:59.769 --rc geninfo_unexecuted_blocks=1 00:07:59.769 00:07:59.769 ' 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.769 --rc genhtml_branch_coverage=1 00:07:59.769 --rc genhtml_function_coverage=1 00:07:59.769 --rc genhtml_legend=1 00:07:59.769 --rc geninfo_all_blocks=1 00:07:59.769 --rc geninfo_unexecuted_blocks=1 00:07:59.769 00:07:59.769 ' 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.769 --rc genhtml_branch_coverage=1 00:07:59.769 --rc genhtml_function_coverage=1 00:07:59.769 --rc genhtml_legend=1 00:07:59.769 --rc geninfo_all_blocks=1 00:07:59.769 --rc geninfo_unexecuted_blocks=1 00:07:59.769 00:07:59.769 ' 00:07:59.769 13:56:37 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:59.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.769 --rc genhtml_branch_coverage=1 00:07:59.769 --rc genhtml_function_coverage=1 00:07:59.769 --rc genhtml_legend=1 00:07:59.769 --rc geninfo_all_blocks=1 00:07:59.769 --rc geninfo_unexecuted_blocks=1 00:07:59.769 00:07:59.769 ' 00:07:59.769 13:56:37 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:00.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:00.592 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.592 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.592 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.851 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:00.851 13:56:38 nvme -- nvme/nvme.sh@79 -- # uname 00:08:00.851 13:56:38 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:00.851 13:56:38 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:00.851 13:56:38 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:00.851 13:56:38 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1071 -- # stubpid=75015 00:08:00.851 Waiting for stub to ready for secondary processes... 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75015 ]] 00:08:00.851 13:56:38 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:00.851 [2024-11-17 13:56:38.986337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:00.851 [2024-11-17 13:56:38.986455] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:01.783 [2024-11-17 13:56:39.732620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.783 [2024-11-17 13:56:39.751542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.783 [2024-11-17 13:56:39.751638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.783 [2024-11-17 13:56:39.751740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.783 [2024-11-17 13:56:39.765701] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:01.783 [2024-11-17 13:56:39.765807] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:01.783 [2024-11-17 13:56:39.779168] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:01.783 [2024-11-17 13:56:39.779352] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:01.783 [2024-11-17 13:56:39.779858] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:01.783 [2024-11-17 13:56:39.780175] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:01.783 [2024-11-17 13:56:39.780328] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:01.783 [2024-11-17 13:56:39.781318] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:01.783 [2024-11-17 13:56:39.781600] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:01.783 [2024-11-17 13:56:39.781727] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:01.783 [2024-11-17 13:56:39.782920] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:01.783 [2024-11-17 13:56:39.783171] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:01.783 [2024-11-17 13:56:39.783286] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:01.783 [2024-11-17 13:56:39.783405] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:01.783 [2024-11-17 13:56:39.783532] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:01.783 13:56:39 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:01.783 done. 00:08:01.783 13:56:39 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:01.783 13:56:39 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:01.783 13:56:39 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:01.783 13:56:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.783 13:56:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.783 ************************************ 00:08:01.783 START TEST nvme_reset 00:08:01.783 ************************************ 00:08:01.783 13:56:39 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:02.041 Initializing NVMe Controllers 00:08:02.041 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:02.041 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:02.041 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:02.041 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:02.041 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:02.041 00:08:02.041 real 0m0.184s 00:08:02.041 user 0m0.058s 00:08:02.041 sys 0m0.082s 00:08:02.041 13:56:40 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.041 ************************************ 00:08:02.041 END TEST nvme_reset 00:08:02.041 ************************************ 00:08:02.041 13:56:40 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:02.041 13:56:40 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:02.041 13:56:40 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.041 13:56:40 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.041 13:56:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.041 ************************************ 00:08:02.041 START TEST nvme_identify 00:08:02.041 ************************************ 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:02.041 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:02.041 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:02.041 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:02.041 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:02.041 13:56:40 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:02.041 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:02.302 [2024-11-17 
13:56:40.388474] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75036 terminated unexpected 00:08:02.302 ===================================================== 00:08:02.302 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.302 ===================================================== 00:08:02.302 Controller Capabilities/Features 00:08:02.302 ================================ 00:08:02.302 Vendor ID: 1b36 00:08:02.302 Subsystem Vendor ID: 1af4 00:08:02.302 Serial Number: 12340 00:08:02.302 Model Number: QEMU NVMe Ctrl 00:08:02.302 Firmware Version: 8.0.0 00:08:02.302 Recommended Arb Burst: 6 00:08:02.302 IEEE OUI Identifier: 00 54 52 00:08:02.302 Multi-path I/O 00:08:02.302 May have multiple subsystem ports: No 00:08:02.302 May have multiple controllers: No 00:08:02.302 Associated with SR-IOV VF: No 00:08:02.302 Max Data Transfer Size: 524288 00:08:02.302 Max Number of Namespaces: 256 00:08:02.302 Max Number of I/O Queues: 64 00:08:02.302 NVMe Specification Version (VS): 1.4 00:08:02.302 NVMe Specification Version (Identify): 1.4 00:08:02.302 Maximum Queue Entries: 2048 00:08:02.302 Contiguous Queues Required: Yes 00:08:02.302 Arbitration Mechanisms Supported 00:08:02.302 Weighted Round Robin: Not Supported 00:08:02.302 Vendor Specific: Not Supported 00:08:02.302 Reset Timeout: 7500 ms 00:08:02.302 Doorbell Stride: 4 bytes 00:08:02.302 NVM Subsystem Reset: Not Supported 00:08:02.302 Command Sets Supported 00:08:02.302 NVM Command Set: Supported 00:08:02.302 Boot Partition: Not Supported 00:08:02.302 Memory Page Size Minimum: 4096 bytes 00:08:02.302 Memory Page Size Maximum: 65536 bytes 00:08:02.302 Persistent Memory Region: Not Supported 00:08:02.302 Optional Asynchronous Events Supported 00:08:02.302 Namespace Attribute Notices: Supported 00:08:02.302 Firmware Activation Notices: Not Supported 00:08:02.302 ANA Change Notices: Not Supported 00:08:02.302 PLE Aggregate Log Change Notices: Not Supported 00:08:02.302 LBA Status Info Alert Notices: Not Supported 00:08:02.302 EGE Aggregate Log Change Notices: Not Supported 00:08:02.302 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.302 Zone Descriptor Change Notices: Not Supported 00:08:02.302 Discovery Log Change Notices: Not Supported 00:08:02.302 Controller Attributes 00:08:02.302 128-bit Host Identifier: Not Supported 00:08:02.302 Non-Operational Permissive Mode: Not Supported 00:08:02.302 NVM Sets: Not Supported 00:08:02.302 Read Recovery Levels: Not Supported 00:08:02.302 Endurance Groups: Not Supported 00:08:02.302 Predictable Latency Mode: Not Supported 00:08:02.302 Traffic Based Keep Alive: Not Supported 00:08:02.302 Namespace Granularity: Not Supported 00:08:02.302 SQ Associations: Not Supported 00:08:02.302 UUID List: Not Supported 00:08:02.302 Multi-Domain Subsystem: Not Supported 00:08:02.302 Fixed Capacity Management: Not Supported 00:08:02.302 Variable Capacity Management: Not Supported 00:08:02.302 Delete Endurance Group: Not Supported 00:08:02.302 Delete NVM Set: Not Supported 00:08:02.302 Extended LBA Formats Supported: Supported 00:08:02.302 Flexible Data Placement Supported: Not Supported 00:08:02.302 00:08:02.302 Controller Memory Buffer Support 00:08:02.302 ================================ 00:08:02.302 Supported: No 00:08:02.302 00:08:02.302 Persistent Memory Region Support 00:08:02.302 ================================ 00:08:02.302 Supported: No 00:08:02.302 00:08:02.302 Admin Command Set Attributes 00:08:02.302 ============================ 00:08:02.302 Security Send/Receive: Not 
Supported 00:08:02.302 Format NVM: Supported 00:08:02.302 Firmware Activate/Download: Not Supported 00:08:02.302 Namespace Management: Supported 00:08:02.302 Device Self-Test: Not Supported 00:08:02.302 Directives: Supported 00:08:02.302 NVMe-MI: Not Supported 00:08:02.302 Virtualization Management: Not Supported 00:08:02.302 Doorbell Buffer Config: Supported 00:08:02.302 Get LBA Status Capability: Not Supported 00:08:02.302 Command & Feature Lockdown Capability: Not Supported 00:08:02.302 Abort Command Limit: 4 00:08:02.302 Async Event Request Limit: 4 00:08:02.302 Number of Firmware Slots: N/A 00:08:02.302 Firmware Slot 1 Read-Only: N/A 00:08:02.302 Firmware Activation Without Reset: N/A 00:08:02.302 Multiple Update Detection Support: N/A 00:08:02.302 Firmware Update Granularity: No Information Provided 00:08:02.302 Per-Namespace SMART Log: Yes 00:08:02.302 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.302 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:02.302 Command Effects Log Page: Supported 00:08:02.302 Get Log Page Extended Data: Supported 00:08:02.302 Telemetry Log Pages: Not Supported 00:08:02.302 Persistent Event Log Pages: Not Supported 00:08:02.302 Supported Log Pages Log Page: May Support 00:08:02.302 Commands Supported & Effects Log Page: Not Supported 00:08:02.302 Feature Identifiers & Effects Log Page:May Support 00:08:02.302 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.303 Data Area 4 for Telemetry Log: Not Supported 00:08:02.303 Error Log Page Entries Supported: 1 00:08:02.303 Keep Alive: Not Supported 00:08:02.303 00:08:02.303 NVM Command Set Attributes 00:08:02.303 ========================== 00:08:02.303 Submission Queue Entry Size 00:08:02.303 Max: 64 00:08:02.303 Min: 64 00:08:02.303 Completion Queue Entry Size 00:08:02.303 Max: 16 00:08:02.303 Min: 16 00:08:02.303 Number of Namespaces: 256 00:08:02.303 Compare Command: Supported 00:08:02.303 Write Uncorrectable Command: Not Supported 00:08:02.303 Dataset Management Command: Supported 00:08:02.303 Write Zeroes Command: Supported 00:08:02.303 Set Features Save Field: Supported 00:08:02.303 Reservations: Not Supported 00:08:02.303 Timestamp: Supported 00:08:02.303 Copy: Supported 00:08:02.303 Volatile Write Cache: Present 00:08:02.303 Atomic Write Unit (Normal): 1 00:08:02.303 Atomic Write Unit (PFail): 1 00:08:02.303 Atomic Compare & Write Unit: 1 00:08:02.303 Fused Compare & Write: Not Supported 00:08:02.303 Scatter-Gather List 00:08:02.303 SGL Command Set: Supported 00:08:02.303 SGL Keyed: Not Supported 00:08:02.303 SGL Bit Bucket Descriptor: Not Supported 00:08:02.303 SGL Metadata Pointer: Not Supported 00:08:02.303 Oversized SGL: Not Supported 00:08:02.303 SGL Metadata Address: Not Supported 00:08:02.303 SGL Offset: Not Supported 00:08:02.303 Transport SGL Data Block: Not Supported 00:08:02.303 Replay Protected Memory Block: Not Supported 00:08:02.303 00:08:02.303 Firmware Slot Information 00:08:02.303 ========================= 00:08:02.303 Active slot: 1 00:08:02.303 Slot 1 Firmware Revision: 1.0 00:08:02.303 00:08:02.303 00:08:02.303 Commands Supported and Effects 00:08:02.303 ============================== 00:08:02.303 Admin Commands 00:08:02.303 -------------- 00:08:02.303 Delete I/O Submission Queue (00h): Supported 00:08:02.303 Create I/O Submission Queue (01h): Supported 00:08:02.303 Get Log Page (02h): Supported 00:08:02.303 Delete I/O Completion Queue (04h): Supported 00:08:02.303 Create I/O Completion Queue (05h): Supported 00:08:02.303 Identify (06h): Supported 00:08:02.303 
Abort (08h): Supported 00:08:02.303 Set Features (09h): Supported 00:08:02.303 Get Features (0Ah): Supported 00:08:02.303 Asynchronous Event Request (0Ch): Supported 00:08:02.303 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.303 Directive Send (19h): Supported 00:08:02.303 Directive Receive (1Ah): Supported 00:08:02.303 Virtualization Management (1Ch): Supported 00:08:02.303 Doorbell Buffer Config (7Ch): Supported 00:08:02.303 Format NVM (80h): Supported LBA-Change 00:08:02.303 I/O Commands 00:08:02.303 ------------ 00:08:02.303 Flush (00h): Supported LBA-Change 00:08:02.303 Write (01h): Supported LBA-Change 00:08:02.303 Read (02h): Supported 00:08:02.303 Compare (05h): Supported 00:08:02.303 Write Zeroes (08h): Supported LBA-Change 00:08:02.303 Dataset Management (09h): Supported LBA-Change 00:08:02.303 Unknown (0Ch): Supported 00:08:02.303 Unknown (12h): Supported 00:08:02.303 Copy (19h): Supported LBA-Change 00:08:02.303 Unknown (1Dh): Supported LBA-Change 00:08:02.303 00:08:02.303 Error Log 00:08:02.303 ========= 00:08:02.303 00:08:02.303 Arbitration 00:08:02.303 =========== 00:08:02.303 Arbitration Burst: no limit 00:08:02.303 00:08:02.303 Power Management 00:08:02.303 ================ 00:08:02.303 Number of Power States: 1 00:08:02.303 Current Power State: Power State #0 00:08:02.303 Power State #0: 00:08:02.303 Max Power: 25.00 W 00:08:02.303 Non-Operational State: Operational 00:08:02.303 Entry Latency: 16 microseconds 00:08:02.303 Exit Latency: 4 microseconds 00:08:02.303 Relative Read Throughput: 0 00:08:02.303 Relative Read Latency: 0 00:08:02.303 Relative Write Throughput: 0 00:08:02.303 Relative Write Latency: 0 00:08:02.303 Idle Power[2024-11-17 13:56:40.389586] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75036 terminated unexpected 00:08:02.303 : Not Reported 00:08:02.303 Active Power: Not Reported 00:08:02.303 Non-Operational Permissive Mode: Not Supported 00:08:02.303 00:08:02.303 Health Information 00:08:02.303 ================== 00:08:02.303 Critical Warnings: 00:08:02.303 Available Spare Space: OK 00:08:02.303 Temperature: OK 00:08:02.303 Device Reliability: OK 00:08:02.303 Read Only: No 00:08:02.303 Volatile Memory Backup: OK 00:08:02.303 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.303 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.303 Available Spare: 0% 00:08:02.303 Available Spare Threshold: 0% 00:08:02.303 Life Percentage Used: 0% 00:08:02.303 Data Units Read: 714 00:08:02.303 Data Units Written: 642 00:08:02.303 Host Read Commands: 37996 00:08:02.303 Host Write Commands: 37782 00:08:02.303 Controller Busy Time: 0 minutes 00:08:02.303 Power Cycles: 0 00:08:02.303 Power On Hours: 0 hours 00:08:02.303 Unsafe Shutdowns: 0 00:08:02.303 Unrecoverable Media Errors: 0 00:08:02.303 Lifetime Error Log Entries: 0 00:08:02.303 Warning Temperature Time: 0 minutes 00:08:02.303 Critical Temperature Time: 0 minutes 00:08:02.303 00:08:02.303 Number of Queues 00:08:02.303 ================ 00:08:02.303 Number of I/O Submission Queues: 64 00:08:02.303 Number of I/O Completion Queues: 64 00:08:02.303 00:08:02.303 ZNS Specific Controller Data 00:08:02.303 ============================ 00:08:02.303 Zone Append Size Limit: 0 00:08:02.303 00:08:02.303 00:08:02.303 Active Namespaces 00:08:02.303 ================= 00:08:02.303 Namespace ID:1 00:08:02.303 Error Recovery Timeout: Unlimited 00:08:02.303 Command Set Identifier: NVM (00h) 00:08:02.303 Deallocate: Supported 00:08:02.303 Deallocated/Unwritten Error: 
Supported 00:08:02.303 Deallocated Read Value: All 0x00 00:08:02.303 Deallocate in Write Zeroes: Not Supported 00:08:02.303 Deallocated Guard Field: 0xFFFF 00:08:02.303 Flush: Supported 00:08:02.303 Reservation: Not Supported 00:08:02.303 Metadata Transferred as: Separate Metadata Buffer 00:08:02.303 Namespace Sharing Capabilities: Private 00:08:02.303 Size (in LBAs): 1548666 (5GiB) 00:08:02.303 Capacity (in LBAs): 1548666 (5GiB) 00:08:02.303 Utilization (in LBAs): 1548666 (5GiB) 00:08:02.303 Thin Provisioning: Not Supported 00:08:02.303 Per-NS Atomic Units: No 00:08:02.303 Maximum Single Source Range Length: 128 00:08:02.303 Maximum Copy Length: 128 00:08:02.303 Maximum Source Range Count: 128 00:08:02.303 NGUID/EUI64 Never Reused: No 00:08:02.303 Namespace Write Protected: No 00:08:02.303 Number of LBA Formats: 8 00:08:02.303 Current LBA Format: LBA Format #07 00:08:02.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.303 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.303 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.303 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.303 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.303 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.303 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.303 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.303 00:08:02.303 NVM Specific Namespace Data 00:08:02.303 =========================== 00:08:02.303 Logical Block Storage Tag Mask: 0 00:08:02.303 Protection Information Capabilities: 00:08:02.303 16b Guard Protection Information Storage Tag Support: No 00:08:02.303 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.303 Storage Tag Check Read Support: No 00:08:02.303 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.303 ===================================================== 00:08:02.303 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.303 ===================================================== 00:08:02.303 Controller Capabilities/Features 00:08:02.303 ================================ 00:08:02.303 Vendor ID: 1b36 00:08:02.303 Subsystem Vendor ID: 1af4 00:08:02.303 Serial Number: 12341 00:08:02.303 Model Number: QEMU NVMe Ctrl 00:08:02.303 Firmware Version: 8.0.0 00:08:02.303 Recommended Arb Burst: 6 00:08:02.303 IEEE OUI Identifier: 00 54 52 00:08:02.303 Multi-path I/O 00:08:02.303 May have multiple subsystem ports: No 00:08:02.303 May have multiple controllers: No 00:08:02.304 Associated with SR-IOV VF: No 00:08:02.304 Max Data Transfer Size: 524288 00:08:02.304 Max Number of Namespaces: 256 00:08:02.304 Max Number of I/O Queues: 64 00:08:02.304 NVMe Specification Version (VS): 1.4 00:08:02.304 NVMe Specification Version (Identify): 1.4 
00:08:02.304 Maximum Queue Entries: 2048 00:08:02.304 Contiguous Queues Required: Yes 00:08:02.304 Arbitration Mechanisms Supported 00:08:02.304 Weighted Round Robin: Not Supported 00:08:02.304 Vendor Specific: Not Supported 00:08:02.304 Reset Timeout: 7500 ms 00:08:02.304 Doorbell Stride: 4 bytes 00:08:02.304 NVM Subsystem Reset: Not Supported 00:08:02.304 Command Sets Supported 00:08:02.304 NVM Command Set: Supported 00:08:02.304 Boot Partition: Not Supported 00:08:02.304 Memory Page Size Minimum: 4096 bytes 00:08:02.304 Memory Page Size Maximum: 65536 bytes 00:08:02.304 Persistent Memory Region: Not Supported 00:08:02.304 Optional Asynchronous Events Supported 00:08:02.304 Namespace Attribute Notices: Supported 00:08:02.304 Firmware Activation Notices: Not Supported 00:08:02.304 ANA Change Notices: Not Supported 00:08:02.304 PLE Aggregate Log Change Notices: Not Supported 00:08:02.304 LBA Status Info Alert Notices: Not Supported 00:08:02.304 EGE Aggregate Log Change Notices: Not Supported 00:08:02.304 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.304 Zone Descriptor Change Notices: Not Supported 00:08:02.304 Discovery Log Change Notices: Not Supported 00:08:02.304 Controller Attributes 00:08:02.304 128-bit Host Identifier: Not Supported 00:08:02.304 Non-Operational Permissive Mode: Not Supported 00:08:02.304 NVM Sets: Not Supported 00:08:02.304 Read Recovery Levels: Not Supported 00:08:02.304 Endurance Groups: Not Supported 00:08:02.304 Predictable Latency Mode: Not Supported 00:08:02.304 Traffic Based Keep Alive: Not Supported 00:08:02.304 Namespace Granularity: Not Supported 00:08:02.304 SQ Associations: Not Supported 00:08:02.304 UUID List: Not Supported 00:08:02.304 Multi-Domain Subsystem: Not Supported 00:08:02.304 Fixed Capacity Management: Not Supported 00:08:02.304 Variable Capacity Management: Not Supported 00:08:02.304 Delete Endurance Group: Not Supported 00:08:02.304 Delete NVM Set: Not Supported 00:08:02.304 Extended LBA Formats Supported: Supported 00:08:02.304 Flexible Data Placement Supported: Not Supported 00:08:02.304 00:08:02.304 Controller Memory Buffer Support 00:08:02.304 ================================ 00:08:02.304 Supported: No 00:08:02.304 00:08:02.304 Persistent Memory Region Support 00:08:02.304 ================================ 00:08:02.304 Supported: No 00:08:02.304 00:08:02.304 Admin Command Set Attributes 00:08:02.304 ============================ 00:08:02.304 Security Send/Receive: Not Supported 00:08:02.304 Format NVM: Supported 00:08:02.304 Firmware Activate/Download: Not Supported 00:08:02.304 Namespace Management: Supported 00:08:02.304 Device Self-Test: Not Supported 00:08:02.304 Directives: Supported 00:08:02.304 NVMe-MI: Not Supported 00:08:02.304 Virtualization Management: Not Supported 00:08:02.304 Doorbell Buffer Config: Supported 00:08:02.304 Get LBA Status Capability: Not Supported 00:08:02.304 Command & Feature Lockdown Capability: Not Supported 00:08:02.304 Abort Command Limit: 4 00:08:02.304 Async Event Request Limit: 4 00:08:02.304 Number of Firmware Slots: N/A 00:08:02.304 Firmware Slot 1 Read-Only: N/A 00:08:02.304 Firmware Activation Without Reset: N/A 00:08:02.304 Multiple Update Detection Support: N/A 00:08:02.304 Firmware Update Granularity: No Information Provided 00:08:02.304 Per-Namespace SMART Log: Yes 00:08:02.304 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.304 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:02.304 Command Effects Log Page: Supported 00:08:02.304 Get Log Page Extended Data: 
Supported 00:08:02.304 Telemetry Log Pages: Not Supported 00:08:02.304 Persistent Event Log Pages: Not Supported 00:08:02.304 Supported Log Pages Log Page: May Support 00:08:02.304 Commands Supported & Effects Log Page: Not Supported 00:08:02.304 Feature Identifiers & Effects Log Page:May Support 00:08:02.304 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.304 Data Area 4 for Telemetry Log: Not Supported 00:08:02.304 Error Log Page Entries Supported: 1 00:08:02.304 Keep Alive: Not Supported 00:08:02.304 00:08:02.304 NVM Command Set Attributes 00:08:02.304 ========================== 00:08:02.304 Submission Queue Entry Size 00:08:02.304 Max: 64 00:08:02.304 Min: 64 00:08:02.304 Completion Queue Entry Size 00:08:02.304 Max: 16 00:08:02.304 Min: 16 00:08:02.304 Number of Namespaces: 256 00:08:02.304 Compare Command: Supported 00:08:02.304 Write Uncorrectable Command: Not Supported 00:08:02.304 Dataset Management Command: Supported 00:08:02.304 Write Zeroes Command: Supported 00:08:02.304 Set Features Save Field: Supported 00:08:02.304 Reservations: Not Supported 00:08:02.304 Timestamp: Supported 00:08:02.304 Copy: Supported 00:08:02.304 Volatile Write Cache: Present 00:08:02.304 Atomic Write Unit (Normal): 1 00:08:02.304 Atomic Write Unit (PFail): 1 00:08:02.304 Atomic Compare & Write Unit: 1 00:08:02.304 Fused Compare & Write: Not Supported 00:08:02.304 Scatter-Gather List 00:08:02.304 SGL Command Set: Supported 00:08:02.304 SGL Keyed: Not Supported 00:08:02.304 SGL Bit Bucket Descriptor: Not Supported 00:08:02.304 SGL Metadata Pointer: Not Supported 00:08:02.304 Oversized SGL: Not Supported 00:08:02.304 SGL Metadata Address: Not Supported 00:08:02.304 SGL Offset: Not Supported 00:08:02.304 Transport SGL Data Block: Not Supported 00:08:02.304 Replay Protected Memory Block: Not Supported 00:08:02.304 00:08:02.304 Firmware Slot Information 00:08:02.304 ========================= 00:08:02.304 Active slot: 1 00:08:02.304 Slot 1 Firmware Revision: 1.0 00:08:02.304 00:08:02.304 00:08:02.304 Commands Supported and Effects 00:08:02.304 ============================== 00:08:02.304 Admin Commands 00:08:02.304 -------------- 00:08:02.304 Delete I/O Submission Queue (00h): Supported 00:08:02.304 Create I/O Submission Queue (01h): Supported 00:08:02.304 Get Log Page (02h): Supported 00:08:02.304 Delete I/O Completion Queue (04h): Supported 00:08:02.304 Create I/O Completion Queue (05h): Supported 00:08:02.304 Identify (06h): Supported 00:08:02.304 Abort (08h): Supported 00:08:02.304 Set Features (09h): Supported 00:08:02.304 Get Features (0Ah): Supported 00:08:02.304 Asynchronous Event Request (0Ch): Supported 00:08:02.304 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.304 Directive Send (19h): Supported 00:08:02.304 Directive Receive (1Ah): Supported 00:08:02.304 Virtualization Management (1Ch): Supported 00:08:02.304 Doorbell Buffer Config (7Ch): Supported 00:08:02.304 Format NVM (80h): Supported LBA-Change 00:08:02.304 I/O Commands 00:08:02.304 ------------ 00:08:02.304 Flush (00h): Supported LBA-Change 00:08:02.304 Write (01h): Supported LBA-Change 00:08:02.304 Read (02h): Supported 00:08:02.304 Compare (05h): Supported 00:08:02.304 Write Zeroes (08h): Supported LBA-Change 00:08:02.304 Dataset Management (09h): Supported LBA-Change 00:08:02.304 Unknown (0Ch): Supported 00:08:02.304 Unknown (12h): Supported 00:08:02.304 Copy (19h): Supported LBA-Change 00:08:02.304 Unknown (1Dh): Supported LBA-Change 00:08:02.304 00:08:02.304 Error Log 00:08:02.304 ========= 00:08:02.304 
00:08:02.304 Arbitration 00:08:02.304 =========== 00:08:02.304 Arbitration Burst: no limit 00:08:02.304 00:08:02.304 Power Management 00:08:02.304 ================ 00:08:02.304 Number of Power States: 1 00:08:02.304 Current Power State: Power State #0 00:08:02.304 Power State #0: 00:08:02.304 Max Power: 25.00 W 00:08:02.304 Non-Operational State: Operational 00:08:02.304 Entry Latency: 16 microseconds 00:08:02.304 Exit Latency: 4 microseconds 00:08:02.304 Relative Read Throughput: 0 00:08:02.304 Relative Read Latency: 0 00:08:02.304 Relative Write Throughput: 0 00:08:02.304 Relative Write Latency: 0 00:08:02.304 Idle Power: Not Reported 00:08:02.304 Active Power: Not Reported 00:08:02.304 Non-Operational Permissive Mode: Not Supported 00:08:02.304 00:08:02.304 Health Information 00:08:02.304 ================== 00:08:02.304 Critical Warnings: 00:08:02.304 Available Spare Space: OK 00:08:02.304 Temperature: [2024-11-17 13:56:40.390377] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75036 terminated unexpected 00:08:02.305 OK 00:08:02.305 Device Reliability: OK 00:08:02.305 Read Only: No 00:08:02.305 Volatile Memory Backup: OK 00:08:02.305 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.305 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.305 Available Spare: 0% 00:08:02.305 Available Spare Threshold: 0% 00:08:02.305 Life Percentage Used: 0% 00:08:02.305 Data Units Read: 1060 00:08:02.305 Data Units Written: 932 00:08:02.305 Host Read Commands: 55000 00:08:02.305 Host Write Commands: 53851 00:08:02.305 Controller Busy Time: 0 minutes 00:08:02.305 Power Cycles: 0 00:08:02.305 Power On Hours: 0 hours 00:08:02.305 Unsafe Shutdowns: 0 00:08:02.305 Unrecoverable Media Errors: 0 00:08:02.305 Lifetime Error Log Entries: 0 00:08:02.305 Warning Temperature Time: 0 minutes 00:08:02.305 Critical Temperature Time: 0 minutes 00:08:02.305 00:08:02.305 Number of Queues 00:08:02.305 ================ 00:08:02.305 Number of I/O Submission Queues: 64 00:08:02.305 Number of I/O Completion Queues: 64 00:08:02.305 00:08:02.305 ZNS Specific Controller Data 00:08:02.305 ============================ 00:08:02.305 Zone Append Size Limit: 0 00:08:02.305 00:08:02.305 00:08:02.305 Active Namespaces 00:08:02.305 ================= 00:08:02.305 Namespace ID:1 00:08:02.305 Error Recovery Timeout: Unlimited 00:08:02.305 Command Set Identifier: NVM (00h) 00:08:02.305 Deallocate: Supported 00:08:02.305 Deallocated/Unwritten Error: Supported 00:08:02.305 Deallocated Read Value: All 0x00 00:08:02.305 Deallocate in Write Zeroes: Not Supported 00:08:02.305 Deallocated Guard Field: 0xFFFF 00:08:02.305 Flush: Supported 00:08:02.305 Reservation: Not Supported 00:08:02.305 Namespace Sharing Capabilities: Private 00:08:02.305 Size (in LBAs): 1310720 (5GiB) 00:08:02.305 Capacity (in LBAs): 1310720 (5GiB) 00:08:02.305 Utilization (in LBAs): 1310720 (5GiB) 00:08:02.305 Thin Provisioning: Not Supported 00:08:02.305 Per-NS Atomic Units: No 00:08:02.305 Maximum Single Source Range Length: 128 00:08:02.305 Maximum Copy Length: 128 00:08:02.305 Maximum Source Range Count: 128 00:08:02.305 NGUID/EUI64 Never Reused: No 00:08:02.305 Namespace Write Protected: No 00:08:02.305 Number of LBA Formats: 8 00:08:02.305 Current LBA Format: LBA Format #04 00:08:02.305 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.305 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.305 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.305 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.305 
LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.305 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.305 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.305 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.305 00:08:02.305 NVM Specific Namespace Data 00:08:02.305 =========================== 00:08:02.305 Logical Block Storage Tag Mask: 0 00:08:02.305 Protection Information Capabilities: 00:08:02.305 16b Guard Protection Information Storage Tag Support: No 00:08:02.305 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.305 Storage Tag Check Read Support: No 00:08:02.305 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.305 ===================================================== 00:08:02.305 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:02.305 ===================================================== 00:08:02.305 Controller Capabilities/Features 00:08:02.305 ================================ 00:08:02.305 Vendor ID: 1b36 00:08:02.305 Subsystem Vendor ID: 1af4 00:08:02.305 Serial Number: 12343 00:08:02.305 Model Number: QEMU NVMe Ctrl 00:08:02.305 Firmware Version: 8.0.0 00:08:02.305 Recommended Arb Burst: 6 00:08:02.305 IEEE OUI Identifier: 00 54 52 00:08:02.305 Multi-path I/O 00:08:02.305 May have multiple subsystem ports: No 00:08:02.305 May have multiple controllers: Yes 00:08:02.305 Associated with SR-IOV VF: No 00:08:02.305 Max Data Transfer Size: 524288 00:08:02.305 Max Number of Namespaces: 256 00:08:02.305 Max Number of I/O Queues: 64 00:08:02.305 NVMe Specification Version (VS): 1.4 00:08:02.305 NVMe Specification Version (Identify): 1.4 00:08:02.305 Maximum Queue Entries: 2048 00:08:02.305 Contiguous Queues Required: Yes 00:08:02.305 Arbitration Mechanisms Supported 00:08:02.305 Weighted Round Robin: Not Supported 00:08:02.305 Vendor Specific: Not Supported 00:08:02.305 Reset Timeout: 7500 ms 00:08:02.305 Doorbell Stride: 4 bytes 00:08:02.305 NVM Subsystem Reset: Not Supported 00:08:02.305 Command Sets Supported 00:08:02.305 NVM Command Set: Supported 00:08:02.305 Boot Partition: Not Supported 00:08:02.305 Memory Page Size Minimum: 4096 bytes 00:08:02.305 Memory Page Size Maximum: 65536 bytes 00:08:02.305 Persistent Memory Region: Not Supported 00:08:02.305 Optional Asynchronous Events Supported 00:08:02.305 Namespace Attribute Notices: Supported 00:08:02.305 Firmware Activation Notices: Not Supported 00:08:02.305 ANA Change Notices: Not Supported 00:08:02.305 PLE Aggregate Log Change Notices: Not Supported 00:08:02.305 LBA Status Info Alert Notices: Not Supported 00:08:02.305 EGE Aggregate Log Change Notices: Not Supported 00:08:02.305 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.305 Zone Descriptor Change Notices: Not Supported 
00:08:02.305 Discovery Log Change Notices: Not Supported 00:08:02.305 Controller Attributes 00:08:02.305 128-bit Host Identifier: Not Supported 00:08:02.305 Non-Operational Permissive Mode: Not Supported 00:08:02.305 NVM Sets: Not Supported 00:08:02.305 Read Recovery Levels: Not Supported 00:08:02.305 Endurance Groups: Supported 00:08:02.305 Predictable Latency Mode: Not Supported 00:08:02.305 Traffic Based Keep Alive: Not Supported 00:08:02.305 Namespace Granularity: Not Supported 00:08:02.305 SQ Associations: Not Supported 00:08:02.305 UUID List: Not Supported 00:08:02.305 Multi-Domain Subsystem: Not Supported 00:08:02.305 Fixed Capacity Management: Not Supported 00:08:02.305 Variable Capacity Management: Not Supported 00:08:02.305 Delete Endurance Group: Not Supported 00:08:02.305 Delete NVM Set: Not Supported 00:08:02.305 Extended LBA Formats Supported: Supported 00:08:02.305 Flexible Data Placement Supported: Supported 00:08:02.305 00:08:02.305 Controller Memory Buffer Support 00:08:02.305 ================================ 00:08:02.305 Supported: No 00:08:02.305 00:08:02.305 Persistent Memory Region Support 00:08:02.305 ================================ 00:08:02.305 Supported: No 00:08:02.305 00:08:02.305 Admin Command Set Attributes 00:08:02.305 ============================ 00:08:02.305 Security Send/Receive: Not Supported 00:08:02.305 Format NVM: Supported 00:08:02.305 Firmware Activate/Download: Not Supported 00:08:02.305 Namespace Management: Supported 00:08:02.305 Device Self-Test: Not Supported 00:08:02.305 Directives: Supported 00:08:02.305 NVMe-MI: Not Supported 00:08:02.305 Virtualization Management: Not Supported 00:08:02.305 Doorbell Buffer Config: Supported 00:08:02.305 Get LBA Status Capability: Not Supported 00:08:02.305 Command & Feature Lockdown Capability: Not Supported 00:08:02.305 Abort Command Limit: 4 00:08:02.305 Async Event Request Limit: 4 00:08:02.305 Number of Firmware Slots: N/A 00:08:02.305 Firmware Slot 1 Read-Only: N/A 00:08:02.305 Firmware Activation Without Reset: N/A 00:08:02.305 Multiple Update Detection Support: N/A 00:08:02.305 Firmware Update Granularity: No Information Provided 00:08:02.305 Per-Namespace SMART Log: Yes 00:08:02.305 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.305 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:02.305 Command Effects Log Page: Supported 00:08:02.305 Get Log Page Extended Data: Supported 00:08:02.305 Telemetry Log Pages: Not Supported 00:08:02.305 Persistent Event Log Pages: Not Supported 00:08:02.305 Supported Log Pages Log Page: May Support 00:08:02.305 Commands Supported & Effects Log Page: Not Supported 00:08:02.305 Feature Identifiers & Effects Log Page:May Support 00:08:02.305 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.305 Data Area 4 for Telemetry Log: Not Supported 00:08:02.305 Error Log Page Entries Supported: 1 00:08:02.305 Keep Alive: Not Supported 00:08:02.305 00:08:02.305 NVM Command Set Attributes 00:08:02.305 ========================== 00:08:02.306 Submission Queue Entry Size 00:08:02.306 Max: 64 00:08:02.306 Min: 64 00:08:02.306 Completion Queue Entry Size 00:08:02.306 Max: 16 00:08:02.306 Min: 16 00:08:02.306 Number of Namespaces: 256 00:08:02.306 Compare Command: Supported 00:08:02.306 Write Uncorrectable Command: Not Supported 00:08:02.306 Dataset Management Command: Supported 00:08:02.306 Write Zeroes Command: Supported 00:08:02.306 Set Features Save Field: Supported 00:08:02.306 Reservations: Not Supported 00:08:02.306 Timestamp: Supported 00:08:02.306 Copy: 
Supported 00:08:02.306 Volatile Write Cache: Present 00:08:02.306 Atomic Write Unit (Normal): 1 00:08:02.306 Atomic Write Unit (PFail): 1 00:08:02.306 Atomic Compare & Write Unit: 1 00:08:02.306 Fused Compare & Write: Not Supported 00:08:02.306 Scatter-Gather List 00:08:02.306 SGL Command Set: Supported 00:08:02.306 SGL Keyed: Not Supported 00:08:02.306 SGL Bit Bucket Descriptor: Not Supported 00:08:02.306 SGL Metadata Pointer: Not Supported 00:08:02.306 Oversized SGL: Not Supported 00:08:02.306 SGL Metadata Address: Not Supported 00:08:02.306 SGL Offset: Not Supported 00:08:02.306 Transport SGL Data Block: Not Supported 00:08:02.306 Replay Protected Memory Block: Not Supported 00:08:02.306 00:08:02.306 Firmware Slot Information 00:08:02.306 ========================= 00:08:02.306 Active slot: 1 00:08:02.306 Slot 1 Firmware Revision: 1.0 00:08:02.306 00:08:02.306 00:08:02.306 Commands Supported and Effects 00:08:02.306 ============================== 00:08:02.306 Admin Commands 00:08:02.306 -------------- 00:08:02.306 Delete I/O Submission Queue (00h): Supported 00:08:02.306 Create I/O Submission Queue (01h): Supported 00:08:02.306 Get Log Page (02h): Supported 00:08:02.306 Delete I/O Completion Queue (04h): Supported 00:08:02.306 Create I/O Completion Queue (05h): Supported 00:08:02.306 Identify (06h): Supported 00:08:02.306 Abort (08h): Supported 00:08:02.306 Set Features (09h): Supported 00:08:02.306 Get Features (0Ah): Supported 00:08:02.306 Asynchronous Event Request (0Ch): Supported 00:08:02.306 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.306 Directive Send (19h): Supported 00:08:02.306 Directive Receive (1Ah): Supported 00:08:02.306 Virtualization Management (1Ch): Supported 00:08:02.306 Doorbell Buffer Config (7Ch): Supported 00:08:02.306 Format NVM (80h): Supported LBA-Change 00:08:02.306 I/O Commands 00:08:02.306 ------------ 00:08:02.306 Flush (00h): Supported LBA-Change 00:08:02.306 Write (01h): Supported LBA-Change 00:08:02.306 Read (02h): Supported 00:08:02.306 Compare (05h): Supported 00:08:02.306 Write Zeroes (08h): Supported LBA-Change 00:08:02.306 Dataset Management (09h): Supported LBA-Change 00:08:02.306 Unknown (0Ch): Supported 00:08:02.306 Unknown (12h): Supported 00:08:02.306 Copy (19h): Supported LBA-Change 00:08:02.306 Unknown (1Dh): Supported LBA-Change 00:08:02.306 00:08:02.306 Error Log 00:08:02.306 ========= 00:08:02.306 00:08:02.306 Arbitration 00:08:02.306 =========== 00:08:02.306 Arbitration Burst: no limit 00:08:02.306 00:08:02.306 Power Management 00:08:02.306 ================ 00:08:02.306 Number of Power States: 1 00:08:02.306 Current Power State: Power State #0 00:08:02.306 Power State #0: 00:08:02.306 Max Power: 25.00 W 00:08:02.306 Non-Operational State: Operational 00:08:02.306 Entry Latency: 16 microseconds 00:08:02.306 Exit Latency: 4 microseconds 00:08:02.306 Relative Read Throughput: 0 00:08:02.306 Relative Read Latency: 0 00:08:02.306 Relative Write Throughput: 0 00:08:02.306 Relative Write Latency: 0 00:08:02.306 Idle Power: Not Reported 00:08:02.306 Active Power: Not Reported 00:08:02.306 Non-Operational Permissive Mode: Not Supported 00:08:02.306 00:08:02.306 Health Information 00:08:02.306 ================== 00:08:02.306 Critical Warnings: 00:08:02.306 Available Spare Space: OK 00:08:02.306 Temperature: OK 00:08:02.306 Device Reliability: OK 00:08:02.306 Read Only: No 00:08:02.306 Volatile Memory Backup: OK 00:08:02.306 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.306 Temperature Threshold: 343 Kelvin (70 
Celsius) 00:08:02.306 Available Spare: 0% 00:08:02.306 Available Spare Threshold: 0% 00:08:02.306 Life Percentage Used: 0% 00:08:02.306 Data Units Read: 760 00:08:02.306 Data Units Written: 689 00:08:02.306 Host Read Commands: 38656 00:08:02.306 Host Write Commands: 38079 00:08:02.306 Controller Busy Time: 0 minutes 00:08:02.306 Power Cycles: 0 00:08:02.306 Power On Hours: 0 hours 00:08:02.306 Unsafe Shutdowns: 0 00:08:02.306 Unrecoverable Media Errors: 0 00:08:02.306 Lifetime Error Log Entries: 0 00:08:02.306 Warning Temperature Time: 0 minutes 00:08:02.306 Critical Temperature Time: 0 minutes 00:08:02.306 00:08:02.306 Number of Queues 00:08:02.306 ================ 00:08:02.306 Number of I/O Submission Queues: 64 00:08:02.306 Number of I/O Completion Queues: 64 00:08:02.306 00:08:02.306 ZNS Specific Controller Data 00:08:02.306 ============================ 00:08:02.306 Zone Append Size Limit: 0 00:08:02.306 00:08:02.306 00:08:02.306 Active Namespaces 00:08:02.306 ================= 00:08:02.306 Namespace ID:1 00:08:02.306 Error Recovery Timeout: Unlimited 00:08:02.306 Command Set Identifier: NVM (00h) 00:08:02.306 Deallocate: Supported 00:08:02.306 Deallocated/Unwritten Error: Supported 00:08:02.306 Deallocated Read Value: All 0x00 00:08:02.306 Deallocate in Write Zeroes: Not Supported 00:08:02.306 Deallocated Guard Field: 0xFFFF 00:08:02.306 Flush: Supported 00:08:02.306 Reservation: Not Supported 00:08:02.306 Namespace Sharing Capabilities: Multiple Controllers 00:08:02.306 Size (in LBAs): 262144 (1GiB) 00:08:02.306 Capacity (in LBAs): 262144 (1GiB) 00:08:02.306 Utilization (in LBAs): 262144 (1GiB) 00:08:02.306 Thin Provisioning: Not Supported 00:08:02.306 Per-NS Atomic Units: No 00:08:02.306 Maximum Single Source Range Length: 128 00:08:02.306 Maximum Copy Length: 128 00:08:02.306 Maximum Source Range Count: 128 00:08:02.306 NGUID/EUI64 Never Reused: No 00:08:02.306 Namespace Write Protected: No 00:08:02.306 Endurance group ID: 1 00:08:02.306 Number of LBA Formats: 8 00:08:02.306 Current LBA Format: LBA Format #04 00:08:02.306 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.306 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.306 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.306 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.306 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.306 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.306 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.306 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.306 00:08:02.306 Get Feature FDP: 00:08:02.306 ================ 00:08:02.306 Enabled: Yes 00:08:02.306 FDP configuration index: 0 00:08:02.306 00:08:02.306 FDP configurations log page 00:08:02.306 =========================== 00:08:02.306 Number of FDP configurations: 1 00:08:02.306 Version: 0 00:08:02.306 Size: 112 00:08:02.306 FDP Configuration Descriptor: 0 00:08:02.306 Descriptor Size: 96 00:08:02.306 Reclaim Group Identifier format: 2 00:08:02.306 FDP Volatile Write Cache: Not Present 00:08:02.306 FDP Configuration: Valid 00:08:02.306 Vendor Specific Size: 0 00:08:02.306 Number of Reclaim Groups: 2 00:08:02.306 Number of Reclaim Unit Handles: 8 00:08:02.306 Max Placement Identifiers: 128 00:08:02.306 Number of Namespaces Supported: 256 00:08:02.306 Reclaim unit Nominal Size: 6000000 bytes 00:08:02.306 Estimated Reclaim Unit Time Limit: Not Reported 00:08:02.306 RUH Desc #000: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #001: RUH Type: Initially Isolated 00:08:02.306 RUH
Desc #002: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #003: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #004: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #005: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #006: RUH Type: Initially Isolated 00:08:02.306 RUH Desc #007: RUH Type: Initially Isolated 00:08:02.306 00:08:02.306 FDP reclaim unit handle usage log page 00:08:02.306 ====================================== 00:08:02.306 Number of Reclaim Unit Handles: 8 00:08:02.306 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:02.306 RUH Usage Desc #001: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #002: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #003: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #004: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #005: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #006: RUH Attributes: Unused 00:08:02.306 RUH Usage Desc #007: RUH Attributes: Unused 00:08:02.306 00:08:02.306 FDP statistics log page 00:08:02.306 ======================= 00:08:02.307 Host bytes with metadata written: 455647232 00:08:02.307 [2024-11-17 13:56:40.391650] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75036 terminated unexpected 00:08:02.307 Media bytes with metadata written: 455700480 00:08:02.307 Media bytes erased: 0 00:08:02.307 00:08:02.307 FDP events log page 00:08:02.307 =================== 00:08:02.307 Number of FDP events: 0 00:08:02.307 00:08:02.307 NVM Specific Namespace Data 00:08:02.307 =========================== 00:08:02.307 Logical Block Storage Tag Mask: 0 00:08:02.307 Protection Information Capabilities: 00:08:02.307 16b Guard Protection Information Storage Tag Support: No 00:08:02.307 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.307 Storage Tag Check Read Support: No 00:08:02.307 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.307 ===================================================== 00:08:02.307 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.307 ===================================================== 00:08:02.307 Controller Capabilities/Features 00:08:02.307 ================================ 00:08:02.307 Vendor ID: 1b36 00:08:02.307 Subsystem Vendor ID: 1af4 00:08:02.307 Serial Number: 12342 00:08:02.307 Model Number: QEMU NVMe Ctrl 00:08:02.307 Firmware Version: 8.0.0 00:08:02.307 Recommended Arb Burst: 6 00:08:02.307 IEEE OUI Identifier: 00 54 52 00:08:02.307 Multi-path I/O 00:08:02.307 May have multiple subsystem ports: No 00:08:02.307 May have multiple controllers: No 00:08:02.307 Associated with SR-IOV VF: No 00:08:02.307 Max Data Transfer Size: 524288 00:08:02.307 Max Number of Namespaces: 256 00:08:02.307 Max Number of I/O Queues: 64 00:08:02.307
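The *ERROR* line inside the FDP statistics above is asynchronous stderr from a second SPDK process (PID 75036, attached to 0000:00:12.0) writing to the same console as the identify dump, which is why it landed interleaved with the statistics counters. A minimal sketch of how such a run can keep the two streams apart; the binary path and the -r/-i arguments are taken from this trace, while the output file names are illustrative, not part of the test harness:

  # Redirect stdout and stderr separately so controller dumps stay contiguous
  # even if another process aborts mid-run. File names are illustrative.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 \
      > identify-12.0.out 2> identify-12.0.err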
NVMe Specification Version (VS): 1.4 00:08:02.307 NVMe Specification Version (Identify): 1.4 00:08:02.307 Maximum Queue Entries: 2048 00:08:02.307 Contiguous Queues Required: Yes 00:08:02.307 Arbitration Mechanisms Supported 00:08:02.307 Weighted Round Robin: Not Supported 00:08:02.307 Vendor Specific: Not Supported 00:08:02.307 Reset Timeout: 7500 ms 00:08:02.307 Doorbell Stride: 4 bytes 00:08:02.307 NVM Subsystem Reset: Not Supported 00:08:02.307 Command Sets Supported 00:08:02.307 NVM Command Set: Supported 00:08:02.307 Boot Partition: Not Supported 00:08:02.307 Memory Page Size Minimum: 4096 bytes 00:08:02.307 Memory Page Size Maximum: 65536 bytes 00:08:02.307 Persistent Memory Region: Not Supported 00:08:02.307 Optional Asynchronous Events Supported 00:08:02.307 Namespace Attribute Notices: Supported 00:08:02.307 Firmware Activation Notices: Not Supported 00:08:02.307 ANA Change Notices: Not Supported 00:08:02.307 PLE Aggregate Log Change Notices: Not Supported 00:08:02.307 LBA Status Info Alert Notices: Not Supported 00:08:02.307 EGE Aggregate Log Change Notices: Not Supported 00:08:02.307 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.307 Zone Descriptor Change Notices: Not Supported 00:08:02.307 Discovery Log Change Notices: Not Supported 00:08:02.307 Controller Attributes 00:08:02.307 128-bit Host Identifier: Not Supported 00:08:02.307 Non-Operational Permissive Mode: Not Supported 00:08:02.307 NVM Sets: Not Supported 00:08:02.307 Read Recovery Levels: Not Supported 00:08:02.307 Endurance Groups: Not Supported 00:08:02.307 Predictable Latency Mode: Not Supported 00:08:02.307 Traffic Based Keep ALive: Not Supported 00:08:02.307 Namespace Granularity: Not Supported 00:08:02.307 SQ Associations: Not Supported 00:08:02.307 UUID List: Not Supported 00:08:02.307 Multi-Domain Subsystem: Not Supported 00:08:02.307 Fixed Capacity Management: Not Supported 00:08:02.307 Variable Capacity Management: Not Supported 00:08:02.307 Delete Endurance Group: Not Supported 00:08:02.307 Delete NVM Set: Not Supported 00:08:02.307 Extended LBA Formats Supported: Supported 00:08:02.307 Flexible Data Placement Supported: Not Supported 00:08:02.307 00:08:02.307 Controller Memory Buffer Support 00:08:02.307 ================================ 00:08:02.307 Supported: No 00:08:02.307 00:08:02.307 Persistent Memory Region Support 00:08:02.307 ================================ 00:08:02.307 Supported: No 00:08:02.307 00:08:02.307 Admin Command Set Attributes 00:08:02.307 ============================ 00:08:02.307 Security Send/Receive: Not Supported 00:08:02.307 Format NVM: Supported 00:08:02.307 Firmware Activate/Download: Not Supported 00:08:02.307 Namespace Management: Supported 00:08:02.307 Device Self-Test: Not Supported 00:08:02.307 Directives: Supported 00:08:02.307 NVMe-MI: Not Supported 00:08:02.307 Virtualization Management: Not Supported 00:08:02.307 Doorbell Buffer Config: Supported 00:08:02.307 Get LBA Status Capability: Not Supported 00:08:02.307 Command & Feature Lockdown Capability: Not Supported 00:08:02.307 Abort Command Limit: 4 00:08:02.307 Async Event Request Limit: 4 00:08:02.307 Number of Firmware Slots: N/A 00:08:02.307 Firmware Slot 1 Read-Only: N/A 00:08:02.307 Firmware Activation Without Reset: N/A 00:08:02.307 Multiple Update Detection Support: N/A 00:08:02.307 Firmware Update Granularity: No Information Provided 00:08:02.307 Per-Namespace SMART Log: Yes 00:08:02.307 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.307 Subsystem NQN: nqn.2019-08.org.qemu:12342 
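The health sections in the dumps above and below report temperatures as integer Kelvin with the Celsius value in parentheses; the printed pairs are consistent with a plain 273 offset rather than 273.15, which a shell one-liner can confirm:

  # Cross-check the Kelvin/Celsius pairs printed in the health sections.
  echo $((323 - 273))   # 50, matches "323 Kelvin (50 Celsius)"
  echo $((343 - 273))   # 70, matches "343 Kelvin (70 Celsius)"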
00:08:02.307 Command Effects Log Page: Supported 00:08:02.307 Get Log Page Extended Data: Supported 00:08:02.307 Telemetry Log Pages: Not Supported 00:08:02.307 Persistent Event Log Pages: Not Supported 00:08:02.307 Supported Log Pages Log Page: May Support 00:08:02.307 Commands Supported & Effects Log Page: Not Supported 00:08:02.307 Feature Identifiers & Effects Log Page:May Support 00:08:02.307 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.307 Data Area 4 for Telemetry Log: Not Supported 00:08:02.307 Error Log Page Entries Supported: 1 00:08:02.307 Keep Alive: Not Supported 00:08:02.307 00:08:02.307 NVM Command Set Attributes 00:08:02.307 ========================== 00:08:02.307 Submission Queue Entry Size 00:08:02.307 Max: 64 00:08:02.307 Min: 64 00:08:02.307 Completion Queue Entry Size 00:08:02.307 Max: 16 00:08:02.307 Min: 16 00:08:02.307 Number of Namespaces: 256 00:08:02.307 Compare Command: Supported 00:08:02.307 Write Uncorrectable Command: Not Supported 00:08:02.307 Dataset Management Command: Supported 00:08:02.307 Write Zeroes Command: Supported 00:08:02.307 Set Features Save Field: Supported 00:08:02.307 Reservations: Not Supported 00:08:02.307 Timestamp: Supported 00:08:02.307 Copy: Supported 00:08:02.307 Volatile Write Cache: Present 00:08:02.307 Atomic Write Unit (Normal): 1 00:08:02.307 Atomic Write Unit (PFail): 1 00:08:02.307 Atomic Compare & Write Unit: 1 00:08:02.307 Fused Compare & Write: Not Supported 00:08:02.307 Scatter-Gather List 00:08:02.307 SGL Command Set: Supported 00:08:02.307 SGL Keyed: Not Supported 00:08:02.307 SGL Bit Bucket Descriptor: Not Supported 00:08:02.308 SGL Metadata Pointer: Not Supported 00:08:02.308 Oversized SGL: Not Supported 00:08:02.308 SGL Metadata Address: Not Supported 00:08:02.308 SGL Offset: Not Supported 00:08:02.308 Transport SGL Data Block: Not Supported 00:08:02.308 Replay Protected Memory Block: Not Supported 00:08:02.308 00:08:02.308 Firmware Slot Information 00:08:02.308 ========================= 00:08:02.308 Active slot: 1 00:08:02.308 Slot 1 Firmware Revision: 1.0 00:08:02.308 00:08:02.308 00:08:02.308 Commands Supported and Effects 00:08:02.308 ============================== 00:08:02.308 Admin Commands 00:08:02.308 -------------- 00:08:02.308 Delete I/O Submission Queue (00h): Supported 00:08:02.308 Create I/O Submission Queue (01h): Supported 00:08:02.308 Get Log Page (02h): Supported 00:08:02.308 Delete I/O Completion Queue (04h): Supported 00:08:02.308 Create I/O Completion Queue (05h): Supported 00:08:02.308 Identify (06h): Supported 00:08:02.308 Abort (08h): Supported 00:08:02.308 Set Features (09h): Supported 00:08:02.308 Get Features (0Ah): Supported 00:08:02.308 Asynchronous Event Request (0Ch): Supported 00:08:02.308 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.308 Directive Send (19h): Supported 00:08:02.308 Directive Receive (1Ah): Supported 00:08:02.308 Virtualization Management (1Ch): Supported 00:08:02.308 Doorbell Buffer Config (7Ch): Supported 00:08:02.308 Format NVM (80h): Supported LBA-Change 00:08:02.308 I/O Commands 00:08:02.308 ------------ 00:08:02.308 Flush (00h): Supported LBA-Change 00:08:02.308 Write (01h): Supported LBA-Change 00:08:02.308 Read (02h): Supported 00:08:02.308 Compare (05h): Supported 00:08:02.308 Write Zeroes (08h): Supported LBA-Change 00:08:02.308 Dataset Management (09h): Supported LBA-Change 00:08:02.308 Unknown (0Ch): Supported 00:08:02.308 Unknown (12h): Supported 00:08:02.308 Copy (19h): Supported LBA-Change 00:08:02.308 Unknown (1Dh): 
Supported LBA-Change 00:08:02.308 00:08:02.308 Error Log 00:08:02.308 ========= 00:08:02.308 00:08:02.308 Arbitration 00:08:02.308 =========== 00:08:02.308 Arbitration Burst: no limit 00:08:02.308 00:08:02.308 Power Management 00:08:02.308 ================ 00:08:02.308 Number of Power States: 1 00:08:02.308 Current Power State: Power State #0 00:08:02.308 Power State #0: 00:08:02.308 Max Power: 25.00 W 00:08:02.308 Non-Operational State: Operational 00:08:02.308 Entry Latency: 16 microseconds 00:08:02.308 Exit Latency: 4 microseconds 00:08:02.308 Relative Read Throughput: 0 00:08:02.308 Relative Read Latency: 0 00:08:02.308 Relative Write Throughput: 0 00:08:02.308 Relative Write Latency: 0 00:08:02.308 Idle Power: Not Reported 00:08:02.308 Active Power: Not Reported 00:08:02.308 Non-Operational Permissive Mode: Not Supported 00:08:02.308 00:08:02.308 Health Information 00:08:02.308 ================== 00:08:02.308 Critical Warnings: 00:08:02.308 Available Spare Space: OK 00:08:02.308 Temperature: OK 00:08:02.308 Device Reliability: OK 00:08:02.308 Read Only: No 00:08:02.308 Volatile Memory Backup: OK 00:08:02.308 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.308 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.308 Available Spare: 0% 00:08:02.308 Available Spare Threshold: 0% 00:08:02.308 Life Percentage Used: 0% 00:08:02.308 Data Units Read: 2218 00:08:02.308 Data Units Written: 2006 00:08:02.308 Host Read Commands: 115192 00:08:02.308 Host Write Commands: 113461 00:08:02.308 Controller Busy Time: 0 minutes 00:08:02.308 Power Cycles: 0 00:08:02.308 Power On Hours: 0 hours 00:08:02.308 Unsafe Shutdowns: 0 00:08:02.308 Unrecoverable Media Errors: 0 00:08:02.308 Lifetime Error Log Entries: 0 00:08:02.308 Warning Temperature Time: 0 minutes 00:08:02.308 Critical Temperature Time: 0 minutes 00:08:02.308 00:08:02.308 Number of Queues 00:08:02.308 ================ 00:08:02.308 Number of I/O Submission Queues: 64 00:08:02.308 Number of I/O Completion Queues: 64 00:08:02.308 00:08:02.308 ZNS Specific Controller Data 00:08:02.308 ============================ 00:08:02.308 Zone Append Size Limit: 0 00:08:02.308 00:08:02.308 00:08:02.308 Active Namespaces 00:08:02.308 ================= 00:08:02.308 Namespace ID:1 00:08:02.308 Error Recovery Timeout: Unlimited 00:08:02.308 Command Set Identifier: NVM (00h) 00:08:02.308 Deallocate: Supported 00:08:02.308 Deallocated/Unwritten Error: Supported 00:08:02.308 Deallocated Read Value: All 0x00 00:08:02.308 Deallocate in Write Zeroes: Not Supported 00:08:02.308 Deallocated Guard Field: 0xFFFF 00:08:02.308 Flush: Supported 00:08:02.308 Reservation: Not Supported 00:08:02.308 Namespace Sharing Capabilities: Private 00:08:02.308 Size (in LBAs): 1048576 (4GiB) 00:08:02.308 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.308 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.308 Thin Provisioning: Not Supported 00:08:02.308 Per-NS Atomic Units: No 00:08:02.308 Maximum Single Source Range Length: 128 00:08:02.308 Maximum Copy Length: 128 00:08:02.308 Maximum Source Range Count: 128 00:08:02.308 NGUID/EUI64 Never Reused: No 00:08:02.308 Namespace Write Protected: No 00:08:02.308 Number of LBA Formats: 8 00:08:02.308 Current LBA Format: LBA Format #04 00:08:02.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.308 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.308 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.308 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.308 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:02.308 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.308 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.308 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.308 00:08:02.308 NVM Specific Namespace Data 00:08:02.308 =========================== 00:08:02.308 Logical Block Storage Tag Mask: 0 00:08:02.308 Protection Information Capabilities: 00:08:02.308 16b Guard Protection Information Storage Tag Support: No 00:08:02.308 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.308 Storage Tag Check Read Support: No 00:08:02.308 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.308 Namespace ID:2 00:08:02.308 Error Recovery Timeout: Unlimited 00:08:02.308 Command Set Identifier: NVM (00h) 00:08:02.308 Deallocate: Supported 00:08:02.308 Deallocated/Unwritten Error: Supported 00:08:02.308 Deallocated Read Value: All 0x00 00:08:02.308 Deallocate in Write Zeroes: Not Supported 00:08:02.308 Deallocated Guard Field: 0xFFFF 00:08:02.308 Flush: Supported 00:08:02.308 Reservation: Not Supported 00:08:02.308 Namespace Sharing Capabilities: Private 00:08:02.308 Size (in LBAs): 1048576 (4GiB) 00:08:02.308 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.308 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.308 Thin Provisioning: Not Supported 00:08:02.308 Per-NS Atomic Units: No 00:08:02.308 Maximum Single Source Range Length: 128 00:08:02.308 Maximum Copy Length: 128 00:08:02.308 Maximum Source Range Count: 128 00:08:02.308 NGUID/EUI64 Never Reused: No 00:08:02.308 Namespace Write Protected: No 00:08:02.308 Number of LBA Formats: 8 00:08:02.308 Current LBA Format: LBA Format #04 00:08:02.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.308 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.308 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.308 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.308 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.308 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.308 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.308 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.308 00:08:02.308 NVM Specific Namespace Data 00:08:02.308 =========================== 00:08:02.308 Logical Block Storage Tag Mask: 0 00:08:02.308 Protection Information Capabilities: 00:08:02.308 16b Guard Protection Information Storage Tag Support: No 00:08:02.308 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.308 Storage Tag Check Read Support: No 00:08:02.308 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Namespace ID:3 00:08:02.309 Error Recovery Timeout: Unlimited 00:08:02.309 Command Set Identifier: NVM (00h) 00:08:02.309 Deallocate: Supported 00:08:02.309 Deallocated/Unwritten Error: Supported 00:08:02.309 Deallocated Read Value: All 0x00 00:08:02.309 Deallocate in Write Zeroes: Not Supported 00:08:02.309 Deallocated Guard Field: 0xFFFF 00:08:02.309 Flush: Supported 00:08:02.309 Reservation: Not Supported 00:08:02.309 Namespace Sharing Capabilities: Private 00:08:02.309 Size (in LBAs): 1048576 (4GiB) 00:08:02.309 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.309 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.309 Thin Provisioning: Not Supported 00:08:02.309 Per-NS Atomic Units: No 00:08:02.309 Maximum Single Source Range Length: 128 00:08:02.309 Maximum Copy Length: 128 00:08:02.309 Maximum Source Range Count: 128 00:08:02.309 NGUID/EUI64 Never Reused: No 00:08:02.309 Namespace Write Protected: No 00:08:02.309 Number of LBA Formats: 8 00:08:02.309 Current LBA Format: LBA Format #04 00:08:02.309 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.309 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.309 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.309 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.309 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.309 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.309 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.309 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.309 00:08:02.309 NVM Specific Namespace Data 00:08:02.309 =========================== 00:08:02.309 Logical Block Storage Tag Mask: 0 00:08:02.309 Protection Information Capabilities: 00:08:02.309 16b Guard Protection Information Storage Tag Support: No 00:08:02.309 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.309 Storage Tag Check Read Support: No 00:08:02.309 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.309 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:02.309 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:02.309 ===================================================== 00:08:02.309 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:02.309 ===================================================== 00:08:02.309 Controller Capabilities/Features 00:08:02.309 ================================ 00:08:02.309 Vendor ID: 1b36 00:08:02.309 Subsystem Vendor ID: 1af4 00:08:02.309 Serial Number: 12340 00:08:02.309 Model Number: QEMU NVMe Ctrl 00:08:02.309 Firmware Version: 8.0.0 00:08:02.309 Recommended Arb Burst: 6 00:08:02.309 IEEE OUI Identifier: 00 54 52 00:08:02.309 Multi-path I/O 00:08:02.309 May have multiple subsystem ports: No 00:08:02.309 May have multiple controllers: No 00:08:02.309 Associated with SR-IOV VF: No 00:08:02.309 Max Data Transfer Size: 524288 00:08:02.309 Max Number of Namespaces: 256 00:08:02.309 Max Number of I/O Queues: 64 00:08:02.309 NVMe Specification Version (VS): 1.4 00:08:02.309 NVMe Specification Version (Identify): 1.4 00:08:02.309 Maximum Queue Entries: 2048 00:08:02.309 Contiguous Queues Required: Yes 00:08:02.309 Arbitration Mechanisms Supported 00:08:02.309 Weighted Round Robin: Not Supported 00:08:02.309 Vendor Specific: Not Supported 00:08:02.309 Reset Timeout: 7500 ms 00:08:02.309 Doorbell Stride: 4 bytes 00:08:02.309 NVM Subsystem Reset: Not Supported 00:08:02.309 Command Sets Supported 00:08:02.309 NVM Command Set: Supported 00:08:02.309 Boot Partition: Not Supported 00:08:02.309 Memory Page Size Minimum: 4096 bytes 00:08:02.309 Memory Page Size Maximum: 65536 bytes 00:08:02.309 Persistent Memory Region: Not Supported 00:08:02.309 Optional Asynchronous Events Supported 00:08:02.309 Namespace Attribute Notices: Supported 00:08:02.309 Firmware Activation Notices: Not Supported 00:08:02.309 ANA Change Notices: Not Supported 00:08:02.309 PLE Aggregate Log Change Notices: Not Supported 00:08:02.309 LBA Status Info Alert Notices: Not Supported 00:08:02.309 EGE Aggregate Log Change Notices: Not Supported 00:08:02.309 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.309 Zone Descriptor Change Notices: Not Supported 00:08:02.309 Discovery Log Change Notices: Not Supported 00:08:02.309 Controller Attributes 00:08:02.309 128-bit Host Identifier: Not Supported 00:08:02.309 Non-Operational Permissive Mode: Not Supported 00:08:02.309 NVM Sets: Not Supported 00:08:02.309 Read Recovery Levels: Not Supported 00:08:02.309 Endurance Groups: Not Supported 00:08:02.309 Predictable Latency Mode: Not Supported 00:08:02.309 Traffic Based Keep ALive: Not Supported 00:08:02.309 Namespace Granularity: Not Supported 00:08:02.309 SQ Associations: Not Supported 00:08:02.309 UUID List: Not Supported 00:08:02.309 Multi-Domain Subsystem: Not Supported 00:08:02.309 Fixed Capacity Management: Not Supported 00:08:02.309 Variable Capacity Management: Not Supported 00:08:02.309 Delete Endurance Group: Not Supported 00:08:02.309 Delete NVM Set: Not Supported 00:08:02.309 Extended LBA Formats Supported: Supported 00:08:02.309 Flexible Data Placement Supported: Not Supported 00:08:02.309 00:08:02.309 Controller Memory Buffer Support 00:08:02.309 ================================ 00:08:02.309 Supported: No 00:08:02.309 00:08:02.309 Persistent Memory Region Support 00:08:02.309 ================================ 00:08:02.309 Supported: No 00:08:02.309 00:08:02.309 Admin Command Set Attributes 00:08:02.309 ============================ 00:08:02.309 Security Send/Receive: Not Supported 00:08:02.309 
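The nvme.sh trace lines above (for bdf in "${bdfs[@]}" followed by the spdk_nvme_identify invocation) show the test running the identify tool once per PCIe address under test. A hedged reconstruction of that loop; the bdfs array is stubbed with the four addresses seen in this section, since how the harness actually populates it is not shown here:

  # Reconstruction of the per-device loop from the nvme.sh trace. How bdfs is
  # populated is not visible in this section, so it is stubbed explicitly.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done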
Format NVM: Supported 00:08:02.309 Firmware Activate/Download: Not Supported 00:08:02.309 Namespace Management: Supported 00:08:02.309 Device Self-Test: Not Supported 00:08:02.309 Directives: Supported 00:08:02.309 NVMe-MI: Not Supported 00:08:02.309 Virtualization Management: Not Supported 00:08:02.309 Doorbell Buffer Config: Supported 00:08:02.309 Get LBA Status Capability: Not Supported 00:08:02.309 Command & Feature Lockdown Capability: Not Supported 00:08:02.309 Abort Command Limit: 4 00:08:02.309 Async Event Request Limit: 4 00:08:02.309 Number of Firmware Slots: N/A 00:08:02.309 Firmware Slot 1 Read-Only: N/A 00:08:02.309 Firmware Activation Without Reset: N/A 00:08:02.309 Multiple Update Detection Support: N/A 00:08:02.309 Firmware Update Granularity: No Information Provided 00:08:02.309 Per-Namespace SMART Log: Yes 00:08:02.309 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.309 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:02.309 Command Effects Log Page: Supported 00:08:02.309 Get Log Page Extended Data: Supported 00:08:02.309 Telemetry Log Pages: Not Supported 00:08:02.309 Persistent Event Log Pages: Not Supported 00:08:02.309 Supported Log Pages Log Page: May Support 00:08:02.309 Commands Supported & Effects Log Page: Not Supported 00:08:02.309 Feature Identifiers & Effects Log Page:May Support 00:08:02.309 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.309 Data Area 4 for Telemetry Log: Not Supported 00:08:02.309 Error Log Page Entries Supported: 1 00:08:02.309 Keep Alive: Not Supported 00:08:02.309 00:08:02.309 NVM Command Set Attributes 00:08:02.309 ========================== 00:08:02.309 Submission Queue Entry Size 00:08:02.309 Max: 64 00:08:02.309 Min: 64 00:08:02.309 Completion Queue Entry Size 00:08:02.309 Max: 16 00:08:02.309 Min: 16 00:08:02.309 Number of Namespaces: 256 00:08:02.309 Compare Command: Supported 00:08:02.309 Write Uncorrectable Command: Not Supported 00:08:02.309 Dataset Management Command: Supported 00:08:02.309 Write Zeroes Command: Supported 00:08:02.309 Set Features Save Field: Supported 00:08:02.309 Reservations: Not Supported 00:08:02.309 Timestamp: Supported 00:08:02.309 Copy: Supported 00:08:02.309 Volatile Write Cache: Present 00:08:02.310 Atomic Write Unit (Normal): 1 00:08:02.310 Atomic Write Unit (PFail): 1 00:08:02.310 Atomic Compare & Write Unit: 1 00:08:02.310 Fused Compare & Write: Not Supported 00:08:02.310 Scatter-Gather List 00:08:02.310 SGL Command Set: Supported 00:08:02.310 SGL Keyed: Not Supported 00:08:02.310 SGL Bit Bucket Descriptor: Not Supported 00:08:02.310 SGL Metadata Pointer: Not Supported 00:08:02.310 Oversized SGL: Not Supported 00:08:02.310 SGL Metadata Address: Not Supported 00:08:02.310 SGL Offset: Not Supported 00:08:02.310 Transport SGL Data Block: Not Supported 00:08:02.310 Replay Protected Memory Block: Not Supported 00:08:02.310 00:08:02.310 Firmware Slot Information 00:08:02.310 ========================= 00:08:02.310 Active slot: 1 00:08:02.310 Slot 1 Firmware Revision: 1.0 00:08:02.310 00:08:02.310 00:08:02.310 Commands Supported and Effects 00:08:02.310 ============================== 00:08:02.310 Admin Commands 00:08:02.310 -------------- 00:08:02.310 Delete I/O Submission Queue (00h): Supported 00:08:02.310 Create I/O Submission Queue (01h): Supported 00:08:02.310 Get Log Page (02h): Supported 00:08:02.310 Delete I/O Completion Queue (04h): Supported 00:08:02.310 Create I/O Completion Queue (05h): Supported 00:08:02.310 Identify (06h): Supported 00:08:02.310 Abort (08h): Supported 
00:08:02.310 Set Features (09h): Supported 00:08:02.310 Get Features (0Ah): Supported 00:08:02.310 Asynchronous Event Request (0Ch): Supported 00:08:02.310 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.310 Directive Send (19h): Supported 00:08:02.310 Directive Receive (1Ah): Supported 00:08:02.310 Virtualization Management (1Ch): Supported 00:08:02.310 Doorbell Buffer Config (7Ch): Supported 00:08:02.310 Format NVM (80h): Supported LBA-Change 00:08:02.310 I/O Commands 00:08:02.310 ------------ 00:08:02.310 Flush (00h): Supported LBA-Change 00:08:02.310 Write (01h): Supported LBA-Change 00:08:02.310 Read (02h): Supported 00:08:02.310 Compare (05h): Supported 00:08:02.310 Write Zeroes (08h): Supported LBA-Change 00:08:02.310 Dataset Management (09h): Supported LBA-Change 00:08:02.310 Unknown (0Ch): Supported 00:08:02.310 Unknown (12h): Supported 00:08:02.310 Copy (19h): Supported LBA-Change 00:08:02.310 Unknown (1Dh): Supported LBA-Change 00:08:02.310 00:08:02.310 Error Log 00:08:02.310 ========= 00:08:02.310 00:08:02.310 Arbitration 00:08:02.310 =========== 00:08:02.310 Arbitration Burst: no limit 00:08:02.310 00:08:02.310 Power Management 00:08:02.310 ================ 00:08:02.310 Number of Power States: 1 00:08:02.310 Current Power State: Power State #0 00:08:02.310 Power State #0: 00:08:02.310 Max Power: 25.00 W 00:08:02.310 Non-Operational State: Operational 00:08:02.310 Entry Latency: 16 microseconds 00:08:02.310 Exit Latency: 4 microseconds 00:08:02.310 Relative Read Throughput: 0 00:08:02.310 Relative Read Latency: 0 00:08:02.310 Relative Write Throughput: 0 00:08:02.310 Relative Write Latency: 0 00:08:02.569 Idle Power: Not Reported 00:08:02.569 Active Power: Not Reported 00:08:02.569 Non-Operational Permissive Mode: Not Supported 00:08:02.569 00:08:02.569 Health Information 00:08:02.569 ================== 00:08:02.569 Critical Warnings: 00:08:02.569 Available Spare Space: OK 00:08:02.569 Temperature: OK 00:08:02.569 Device Reliability: OK 00:08:02.569 Read Only: No 00:08:02.569 Volatile Memory Backup: OK 00:08:02.569 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.569 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.569 Available Spare: 0% 00:08:02.569 Available Spare Threshold: 0% 00:08:02.569 Life Percentage Used: 0% 00:08:02.569 Data Units Read: 714 00:08:02.569 Data Units Written: 642 00:08:02.569 Host Read Commands: 37996 00:08:02.569 Host Write Commands: 37782 00:08:02.569 Controller Busy Time: 0 minutes 00:08:02.569 Power Cycles: 0 00:08:02.569 Power On Hours: 0 hours 00:08:02.569 Unsafe Shutdowns: 0 00:08:02.569 Unrecoverable Media Errors: 0 00:08:02.569 Lifetime Error Log Entries: 0 00:08:02.569 Warning Temperature Time: 0 minutes 00:08:02.569 Critical Temperature Time: 0 minutes 00:08:02.569 00:08:02.569 Number of Queues 00:08:02.569 ================ 00:08:02.569 Number of I/O Submission Queues: 64 00:08:02.569 Number of I/O Completion Queues: 64 00:08:02.569 00:08:02.569 ZNS Specific Controller Data 00:08:02.569 ============================ 00:08:02.569 Zone Append Size Limit: 0 00:08:02.569 00:08:02.569 00:08:02.569 Active Namespaces 00:08:02.569 ================= 00:08:02.569 Namespace ID:1 00:08:02.569 Error Recovery Timeout: Unlimited 00:08:02.569 Command Set Identifier: NVM (00h) 00:08:02.569 Deallocate: Supported 00:08:02.569 Deallocated/Unwritten Error: Supported 00:08:02.569 Deallocated Read Value: All 0x00 00:08:02.569 Deallocate in Write Zeroes: Not Supported 00:08:02.569 Deallocated Guard Field: 0xFFFF 00:08:02.569 Flush: 
Supported 00:08:02.569 Reservation: Not Supported 00:08:02.569 Metadata Transferred as: Separate Metadata Buffer 00:08:02.569 Namespace Sharing Capabilities: Private 00:08:02.569 Size (in LBAs): 1548666 (5GiB) 00:08:02.569 Capacity (in LBAs): 1548666 (5GiB) 00:08:02.569 Utilization (in LBAs): 1548666 (5GiB) 00:08:02.569 Thin Provisioning: Not Supported 00:08:02.569 Per-NS Atomic Units: No 00:08:02.569 Maximum Single Source Range Length: 128 00:08:02.569 Maximum Copy Length: 128 00:08:02.569 Maximum Source Range Count: 128 00:08:02.569 NGUID/EUI64 Never Reused: No 00:08:02.569 Namespace Write Protected: No 00:08:02.569 Number of LBA Formats: 8 00:08:02.569 Current LBA Format: LBA Format #07 00:08:02.569 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.569 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.569 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.569 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.569 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.569 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.569 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.569 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.569 00:08:02.569 NVM Specific Namespace Data 00:08:02.569 =========================== 00:08:02.569 Logical Block Storage Tag Mask: 0 00:08:02.569 Protection Information Capabilities: 00:08:02.569 16b Guard Protection Information Storage Tag Support: No 00:08:02.569 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.569 Storage Tag Check Read Support: No 00:08:02.569 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.569 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:02.569 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:02.569 ===================================================== 00:08:02.569 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:02.569 ===================================================== 00:08:02.569 Controller Capabilities/Features 00:08:02.569 ================================ 00:08:02.569 Vendor ID: 1b36 00:08:02.569 Subsystem Vendor ID: 1af4 00:08:02.569 Serial Number: 12341 00:08:02.569 Model Number: QEMU NVMe Ctrl 00:08:02.569 Firmware Version: 8.0.0 00:08:02.569 Recommended Arb Burst: 6 00:08:02.569 IEEE OUI Identifier: 00 54 52 00:08:02.569 Multi-path I/O 00:08:02.569 May have multiple subsystem ports: No 00:08:02.569 May have multiple controllers: No 00:08:02.569 Associated with SR-IOV VF: No 00:08:02.569 Max Data Transfer Size: 524288 00:08:02.569 Max Number of Namespaces: 256 00:08:02.569 Max Number of I/O Queues: 64 00:08:02.569 NVMe 
Specification Version (VS): 1.4 00:08:02.570 NVMe Specification Version (Identify): 1.4 00:08:02.570 Maximum Queue Entries: 2048 00:08:02.570 Contiguous Queues Required: Yes 00:08:02.570 Arbitration Mechanisms Supported 00:08:02.570 Weighted Round Robin: Not Supported 00:08:02.570 Vendor Specific: Not Supported 00:08:02.570 Reset Timeout: 7500 ms 00:08:02.570 Doorbell Stride: 4 bytes 00:08:02.570 NVM Subsystem Reset: Not Supported 00:08:02.570 Command Sets Supported 00:08:02.570 NVM Command Set: Supported 00:08:02.570 Boot Partition: Not Supported 00:08:02.570 Memory Page Size Minimum: 4096 bytes 00:08:02.570 Memory Page Size Maximum: 65536 bytes 00:08:02.570 Persistent Memory Region: Not Supported 00:08:02.570 Optional Asynchronous Events Supported 00:08:02.570 Namespace Attribute Notices: Supported 00:08:02.570 Firmware Activation Notices: Not Supported 00:08:02.570 ANA Change Notices: Not Supported 00:08:02.570 PLE Aggregate Log Change Notices: Not Supported 00:08:02.570 LBA Status Info Alert Notices: Not Supported 00:08:02.570 EGE Aggregate Log Change Notices: Not Supported 00:08:02.570 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.570 Zone Descriptor Change Notices: Not Supported 00:08:02.570 Discovery Log Change Notices: Not Supported 00:08:02.570 Controller Attributes 00:08:02.570 128-bit Host Identifier: Not Supported 00:08:02.570 Non-Operational Permissive Mode: Not Supported 00:08:02.570 NVM Sets: Not Supported 00:08:02.570 Read Recovery Levels: Not Supported 00:08:02.570 Endurance Groups: Not Supported 00:08:02.570 Predictable Latency Mode: Not Supported 00:08:02.570 Traffic Based Keep ALive: Not Supported 00:08:02.570 Namespace Granularity: Not Supported 00:08:02.570 SQ Associations: Not Supported 00:08:02.570 UUID List: Not Supported 00:08:02.570 Multi-Domain Subsystem: Not Supported 00:08:02.570 Fixed Capacity Management: Not Supported 00:08:02.570 Variable Capacity Management: Not Supported 00:08:02.570 Delete Endurance Group: Not Supported 00:08:02.570 Delete NVM Set: Not Supported 00:08:02.570 Extended LBA Formats Supported: Supported 00:08:02.570 Flexible Data Placement Supported: Not Supported 00:08:02.570 00:08:02.570 Controller Memory Buffer Support 00:08:02.570 ================================ 00:08:02.570 Supported: No 00:08:02.570 00:08:02.570 Persistent Memory Region Support 00:08:02.570 ================================ 00:08:02.570 Supported: No 00:08:02.570 00:08:02.570 Admin Command Set Attributes 00:08:02.570 ============================ 00:08:02.570 Security Send/Receive: Not Supported 00:08:02.570 Format NVM: Supported 00:08:02.570 Firmware Activate/Download: Not Supported 00:08:02.570 Namespace Management: Supported 00:08:02.570 Device Self-Test: Not Supported 00:08:02.570 Directives: Supported 00:08:02.570 NVMe-MI: Not Supported 00:08:02.570 Virtualization Management: Not Supported 00:08:02.570 Doorbell Buffer Config: Supported 00:08:02.570 Get LBA Status Capability: Not Supported 00:08:02.570 Command & Feature Lockdown Capability: Not Supported 00:08:02.570 Abort Command Limit: 4 00:08:02.570 Async Event Request Limit: 4 00:08:02.570 Number of Firmware Slots: N/A 00:08:02.570 Firmware Slot 1 Read-Only: N/A 00:08:02.570 Firmware Activation Without Reset: N/A 00:08:02.570 Multiple Update Detection Support: N/A 00:08:02.570 Firmware Update Granularity: No Information Provided 00:08:02.570 Per-Namespace SMART Log: Yes 00:08:02.570 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.570 Subsystem NQN: nqn.2019-08.org.qemu:12341 
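The namespaces below report Size, Capacity, and Utilization in LBAs with a GiB figure in parentheses; with the current LBA format #04 (4096-byte data size, no metadata) the two agree exactly, as quick shell arithmetic shows:

  # Size (in LBAs) x 4096-byte blocks, expressed in GiB.
  echo $((1310720 * 4096 / 1024 / 1024 / 1024))   # 5, matches "1310720 (5GiB)"
  echo $((1048576 * 4096 / 1024 / 1024 / 1024))   # 4, matches "1048576 (4GiB)"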
00:08:02.570 Command Effects Log Page: Supported 00:08:02.570 Get Log Page Extended Data: Supported 00:08:02.570 Telemetry Log Pages: Not Supported 00:08:02.570 Persistent Event Log Pages: Not Supported 00:08:02.570 Supported Log Pages Log Page: May Support 00:08:02.570 Commands Supported & Effects Log Page: Not Supported 00:08:02.570 Feature Identifiers & Effects Log Page:May Support 00:08:02.570 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.570 Data Area 4 for Telemetry Log: Not Supported 00:08:02.570 Error Log Page Entries Supported: 1 00:08:02.570 Keep Alive: Not Supported 00:08:02.570 00:08:02.570 NVM Command Set Attributes 00:08:02.570 ========================== 00:08:02.570 Submission Queue Entry Size 00:08:02.570 Max: 64 00:08:02.570 Min: 64 00:08:02.570 Completion Queue Entry Size 00:08:02.570 Max: 16 00:08:02.570 Min: 16 00:08:02.570 Number of Namespaces: 256 00:08:02.570 Compare Command: Supported 00:08:02.570 Write Uncorrectable Command: Not Supported 00:08:02.570 Dataset Management Command: Supported 00:08:02.570 Write Zeroes Command: Supported 00:08:02.570 Set Features Save Field: Supported 00:08:02.570 Reservations: Not Supported 00:08:02.570 Timestamp: Supported 00:08:02.570 Copy: Supported 00:08:02.570 Volatile Write Cache: Present 00:08:02.570 Atomic Write Unit (Normal): 1 00:08:02.570 Atomic Write Unit (PFail): 1 00:08:02.570 Atomic Compare & Write Unit: 1 00:08:02.570 Fused Compare & Write: Not Supported 00:08:02.570 Scatter-Gather List 00:08:02.570 SGL Command Set: Supported 00:08:02.570 SGL Keyed: Not Supported 00:08:02.570 SGL Bit Bucket Descriptor: Not Supported 00:08:02.570 SGL Metadata Pointer: Not Supported 00:08:02.570 Oversized SGL: Not Supported 00:08:02.570 SGL Metadata Address: Not Supported 00:08:02.570 SGL Offset: Not Supported 00:08:02.570 Transport SGL Data Block: Not Supported 00:08:02.570 Replay Protected Memory Block: Not Supported 00:08:02.570 00:08:02.570 Firmware Slot Information 00:08:02.570 ========================= 00:08:02.570 Active slot: 1 00:08:02.570 Slot 1 Firmware Revision: 1.0 00:08:02.570 00:08:02.570 00:08:02.570 Commands Supported and Effects 00:08:02.570 ============================== 00:08:02.570 Admin Commands 00:08:02.570 -------------- 00:08:02.570 Delete I/O Submission Queue (00h): Supported 00:08:02.570 Create I/O Submission Queue (01h): Supported 00:08:02.570 Get Log Page (02h): Supported 00:08:02.570 Delete I/O Completion Queue (04h): Supported 00:08:02.570 Create I/O Completion Queue (05h): Supported 00:08:02.570 Identify (06h): Supported 00:08:02.570 Abort (08h): Supported 00:08:02.570 Set Features (09h): Supported 00:08:02.570 Get Features (0Ah): Supported 00:08:02.570 Asynchronous Event Request (0Ch): Supported 00:08:02.570 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.570 Directive Send (19h): Supported 00:08:02.570 Directive Receive (1Ah): Supported 00:08:02.570 Virtualization Management (1Ch): Supported 00:08:02.570 Doorbell Buffer Config (7Ch): Supported 00:08:02.570 Format NVM (80h): Supported LBA-Change 00:08:02.570 I/O Commands 00:08:02.570 ------------ 00:08:02.570 Flush (00h): Supported LBA-Change 00:08:02.570 Write (01h): Supported LBA-Change 00:08:02.570 Read (02h): Supported 00:08:02.570 Compare (05h): Supported 00:08:02.570 Write Zeroes (08h): Supported LBA-Change 00:08:02.570 Dataset Management (09h): Supported LBA-Change 00:08:02.570 Unknown (0Ch): Supported 00:08:02.570 Unknown (12h): Supported 00:08:02.570 Copy (19h): Supported LBA-Change 00:08:02.570 Unknown (1Dh): 
Supported LBA-Change 00:08:02.570 00:08:02.570 Error Log 00:08:02.570 ========= 00:08:02.570 00:08:02.570 Arbitration 00:08:02.570 =========== 00:08:02.570 Arbitration Burst: no limit 00:08:02.570 00:08:02.570 Power Management 00:08:02.570 ================ 00:08:02.570 Number of Power States: 1 00:08:02.570 Current Power State: Power State #0 00:08:02.570 Power State #0: 00:08:02.570 Max Power: 25.00 W 00:08:02.570 Non-Operational State: Operational 00:08:02.570 Entry Latency: 16 microseconds 00:08:02.570 Exit Latency: 4 microseconds 00:08:02.570 Relative Read Throughput: 0 00:08:02.570 Relative Read Latency: 0 00:08:02.570 Relative Write Throughput: 0 00:08:02.570 Relative Write Latency: 0 00:08:02.570 Idle Power: Not Reported 00:08:02.570 Active Power: Not Reported 00:08:02.570 Non-Operational Permissive Mode: Not Supported 00:08:02.570 00:08:02.570 Health Information 00:08:02.570 ================== 00:08:02.570 Critical Warnings: 00:08:02.570 Available Spare Space: OK 00:08:02.570 Temperature: OK 00:08:02.570 Device Reliability: OK 00:08:02.570 Read Only: No 00:08:02.570 Volatile Memory Backup: OK 00:08:02.570 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.570 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.570 Available Spare: 0% 00:08:02.570 Available Spare Threshold: 0% 00:08:02.570 Life Percentage Used: 0% 00:08:02.570 Data Units Read: 1060 00:08:02.570 Data Units Written: 932 00:08:02.570 Host Read Commands: 55000 00:08:02.570 Host Write Commands: 53851 00:08:02.570 Controller Busy Time: 0 minutes 00:08:02.570 Power Cycles: 0 00:08:02.571 Power On Hours: 0 hours 00:08:02.571 Unsafe Shutdowns: 0 00:08:02.571 Unrecoverable Media Errors: 0 00:08:02.571 Lifetime Error Log Entries: 0 00:08:02.571 Warning Temperature Time: 0 minutes 00:08:02.571 Critical Temperature Time: 0 minutes 00:08:02.571 00:08:02.571 Number of Queues 00:08:02.571 ================ 00:08:02.571 Number of I/O Submission Queues: 64 00:08:02.571 Number of I/O Completion Queues: 64 00:08:02.571 00:08:02.571 ZNS Specific Controller Data 00:08:02.571 ============================ 00:08:02.571 Zone Append Size Limit: 0 00:08:02.571 00:08:02.571 00:08:02.571 Active Namespaces 00:08:02.571 ================= 00:08:02.571 Namespace ID:1 00:08:02.571 Error Recovery Timeout: Unlimited 00:08:02.571 Command Set Identifier: NVM (00h) 00:08:02.571 Deallocate: Supported 00:08:02.571 Deallocated/Unwritten Error: Supported 00:08:02.571 Deallocated Read Value: All 0x00 00:08:02.571 Deallocate in Write Zeroes: Not Supported 00:08:02.571 Deallocated Guard Field: 0xFFFF 00:08:02.571 Flush: Supported 00:08:02.571 Reservation: Not Supported 00:08:02.571 Namespace Sharing Capabilities: Private 00:08:02.571 Size (in LBAs): 1310720 (5GiB) 00:08:02.571 Capacity (in LBAs): 1310720 (5GiB) 00:08:02.571 Utilization (in LBAs): 1310720 (5GiB) 00:08:02.571 Thin Provisioning: Not Supported 00:08:02.571 Per-NS Atomic Units: No 00:08:02.571 Maximum Single Source Range Length: 128 00:08:02.571 Maximum Copy Length: 128 00:08:02.571 Maximum Source Range Count: 128 00:08:02.571 NGUID/EUI64 Never Reused: No 00:08:02.571 Namespace Write Protected: No 00:08:02.571 Number of LBA Formats: 8 00:08:02.571 Current LBA Format: LBA Format #04 00:08:02.571 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.571 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.571 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.571 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.571 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:02.571 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.571 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.571 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.571 00:08:02.571 NVM Specific Namespace Data 00:08:02.571 =========================== 00:08:02.571 Logical Block Storage Tag Mask: 0 00:08:02.571 Protection Information Capabilities: 00:08:02.571 16b Guard Protection Information Storage Tag Support: No 00:08:02.571 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.571 Storage Tag Check Read Support: No 00:08:02.571 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.571 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:02.571 13:56:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:02.830 ===================================================== 00:08:02.830 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:02.830 ===================================================== 00:08:02.830 Controller Capabilities/Features 00:08:02.830 ================================ 00:08:02.830 Vendor ID: 1b36 00:08:02.830 Subsystem Vendor ID: 1af4 00:08:02.830 Serial Number: 12342 00:08:02.830 Model Number: QEMU NVMe Ctrl 00:08:02.830 Firmware Version: 8.0.0 00:08:02.830 Recommended Arb Burst: 6 00:08:02.830 IEEE OUI Identifier: 00 54 52 00:08:02.830 Multi-path I/O 00:08:02.830 May have multiple subsystem ports: No 00:08:02.830 May have multiple controllers: No 00:08:02.830 Associated with SR-IOV VF: No 00:08:02.830 Max Data Transfer Size: 524288 00:08:02.830 Max Number of Namespaces: 256 00:08:02.830 Max Number of I/O Queues: 64 00:08:02.830 NVMe Specification Version (VS): 1.4 00:08:02.830 NVMe Specification Version (Identify): 1.4 00:08:02.830 Maximum Queue Entries: 2048 00:08:02.830 Contiguous Queues Required: Yes 00:08:02.830 Arbitration Mechanisms Supported 00:08:02.830 Weighted Round Robin: Not Supported 00:08:02.830 Vendor Specific: Not Supported 00:08:02.830 Reset Timeout: 7500 ms 00:08:02.830 Doorbell Stride: 4 bytes 00:08:02.830 NVM Subsystem Reset: Not Supported 00:08:02.830 Command Sets Supported 00:08:02.830 NVM Command Set: Supported 00:08:02.830 Boot Partition: Not Supported 00:08:02.830 Memory Page Size Minimum: 4096 bytes 00:08:02.830 Memory Page Size Maximum: 65536 bytes 00:08:02.830 Persistent Memory Region: Not Supported 00:08:02.830 Optional Asynchronous Events Supported 00:08:02.830 Namespace Attribute Notices: Supported 00:08:02.830 Firmware Activation Notices: Not Supported 00:08:02.830 ANA Change Notices: Not Supported 00:08:02.830 PLE Aggregate Log Change Notices: Not Supported 00:08:02.830 LBA Status Info Alert Notices: 
Not Supported 00:08:02.830 EGE Aggregate Log Change Notices: Not Supported 00:08:02.830 Normal NVM Subsystem Shutdown event: Not Supported 00:08:02.830 Zone Descriptor Change Notices: Not Supported 00:08:02.830 Discovery Log Change Notices: Not Supported 00:08:02.830 Controller Attributes 00:08:02.831 128-bit Host Identifier: Not Supported 00:08:02.831 Non-Operational Permissive Mode: Not Supported 00:08:02.831 NVM Sets: Not Supported 00:08:02.831 Read Recovery Levels: Not Supported 00:08:02.831 Endurance Groups: Not Supported 00:08:02.831 Predictable Latency Mode: Not Supported 00:08:02.831 Traffic Based Keep ALive: Not Supported 00:08:02.831 Namespace Granularity: Not Supported 00:08:02.831 SQ Associations: Not Supported 00:08:02.831 UUID List: Not Supported 00:08:02.831 Multi-Domain Subsystem: Not Supported 00:08:02.831 Fixed Capacity Management: Not Supported 00:08:02.831 Variable Capacity Management: Not Supported 00:08:02.831 Delete Endurance Group: Not Supported 00:08:02.831 Delete NVM Set: Not Supported 00:08:02.831 Extended LBA Formats Supported: Supported 00:08:02.831 Flexible Data Placement Supported: Not Supported 00:08:02.831 00:08:02.831 Controller Memory Buffer Support 00:08:02.831 ================================ 00:08:02.831 Supported: No 00:08:02.831 00:08:02.831 Persistent Memory Region Support 00:08:02.831 ================================ 00:08:02.831 Supported: No 00:08:02.831 00:08:02.831 Admin Command Set Attributes 00:08:02.831 ============================ 00:08:02.831 Security Send/Receive: Not Supported 00:08:02.831 Format NVM: Supported 00:08:02.831 Firmware Activate/Download: Not Supported 00:08:02.831 Namespace Management: Supported 00:08:02.831 Device Self-Test: Not Supported 00:08:02.831 Directives: Supported 00:08:02.831 NVMe-MI: Not Supported 00:08:02.831 Virtualization Management: Not Supported 00:08:02.831 Doorbell Buffer Config: Supported 00:08:02.831 Get LBA Status Capability: Not Supported 00:08:02.831 Command & Feature Lockdown Capability: Not Supported 00:08:02.831 Abort Command Limit: 4 00:08:02.831 Async Event Request Limit: 4 00:08:02.831 Number of Firmware Slots: N/A 00:08:02.831 Firmware Slot 1 Read-Only: N/A 00:08:02.831 Firmware Activation Without Reset: N/A 00:08:02.831 Multiple Update Detection Support: N/A 00:08:02.831 Firmware Update Granularity: No Information Provided 00:08:02.831 Per-Namespace SMART Log: Yes 00:08:02.831 Asymmetric Namespace Access Log Page: Not Supported 00:08:02.831 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:02.831 Command Effects Log Page: Supported 00:08:02.831 Get Log Page Extended Data: Supported 00:08:02.831 Telemetry Log Pages: Not Supported 00:08:02.831 Persistent Event Log Pages: Not Supported 00:08:02.831 Supported Log Pages Log Page: May Support 00:08:02.831 Commands Supported & Effects Log Page: Not Supported 00:08:02.831 Feature Identifiers & Effects Log Page:May Support 00:08:02.831 NVMe-MI Commands & Effects Log Page: May Support 00:08:02.831 Data Area 4 for Telemetry Log: Not Supported 00:08:02.831 Error Log Page Entries Supported: 1 00:08:02.831 Keep Alive: Not Supported 00:08:02.831 00:08:02.831 NVM Command Set Attributes 00:08:02.831 ========================== 00:08:02.831 Submission Queue Entry Size 00:08:02.831 Max: 64 00:08:02.831 Min: 64 00:08:02.831 Completion Queue Entry Size 00:08:02.831 Max: 16 00:08:02.831 Min: 16 00:08:02.831 Number of Namespaces: 256 00:08:02.831 Compare Command: Supported 00:08:02.831 Write Uncorrectable Command: Not Supported 00:08:02.831 Dataset Management Command: 
Supported 00:08:02.831 Write Zeroes Command: Supported 00:08:02.831 Set Features Save Field: Supported 00:08:02.831 Reservations: Not Supported 00:08:02.831 Timestamp: Supported 00:08:02.831 Copy: Supported 00:08:02.831 Volatile Write Cache: Present 00:08:02.831 Atomic Write Unit (Normal): 1 00:08:02.831 Atomic Write Unit (PFail): 1 00:08:02.831 Atomic Compare & Write Unit: 1 00:08:02.831 Fused Compare & Write: Not Supported 00:08:02.831 Scatter-Gather List 00:08:02.831 SGL Command Set: Supported 00:08:02.831 SGL Keyed: Not Supported 00:08:02.831 SGL Bit Bucket Descriptor: Not Supported 00:08:02.831 SGL Metadata Pointer: Not Supported 00:08:02.831 Oversized SGL: Not Supported 00:08:02.831 SGL Metadata Address: Not Supported 00:08:02.831 SGL Offset: Not Supported 00:08:02.831 Transport SGL Data Block: Not Supported 00:08:02.831 Replay Protected Memory Block: Not Supported 00:08:02.831 00:08:02.831 Firmware Slot Information 00:08:02.831 ========================= 00:08:02.831 Active slot: 1 00:08:02.831 Slot 1 Firmware Revision: 1.0 00:08:02.831 00:08:02.831 00:08:02.831 Commands Supported and Effects 00:08:02.831 ============================== 00:08:02.831 Admin Commands 00:08:02.831 -------------- 00:08:02.831 Delete I/O Submission Queue (00h): Supported 00:08:02.831 Create I/O Submission Queue (01h): Supported 00:08:02.831 Get Log Page (02h): Supported 00:08:02.831 Delete I/O Completion Queue (04h): Supported 00:08:02.831 Create I/O Completion Queue (05h): Supported 00:08:02.831 Identify (06h): Supported 00:08:02.831 Abort (08h): Supported 00:08:02.831 Set Features (09h): Supported 00:08:02.831 Get Features (0Ah): Supported 00:08:02.831 Asynchronous Event Request (0Ch): Supported 00:08:02.831 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:02.831 Directive Send (19h): Supported 00:08:02.831 Directive Receive (1Ah): Supported 00:08:02.831 Virtualization Management (1Ch): Supported 00:08:02.831 Doorbell Buffer Config (7Ch): Supported 00:08:02.831 Format NVM (80h): Supported LBA-Change 00:08:02.831 I/O Commands 00:08:02.831 ------------ 00:08:02.831 Flush (00h): Supported LBA-Change 00:08:02.831 Write (01h): Supported LBA-Change 00:08:02.831 Read (02h): Supported 00:08:02.831 Compare (05h): Supported 00:08:02.831 Write Zeroes (08h): Supported LBA-Change 00:08:02.831 Dataset Management (09h): Supported LBA-Change 00:08:02.831 Unknown (0Ch): Supported 00:08:02.831 Unknown (12h): Supported 00:08:02.831 Copy (19h): Supported LBA-Change 00:08:02.831 Unknown (1Dh): Supported LBA-Change 00:08:02.831 00:08:02.831 Error Log 00:08:02.831 ========= 00:08:02.831 00:08:02.831 Arbitration 00:08:02.831 =========== 00:08:02.831 Arbitration Burst: no limit 00:08:02.831 00:08:02.831 Power Management 00:08:02.831 ================ 00:08:02.831 Number of Power States: 1 00:08:02.831 Current Power State: Power State #0 00:08:02.831 Power State #0: 00:08:02.831 Max Power: 25.00 W 00:08:02.831 Non-Operational State: Operational 00:08:02.831 Entry Latency: 16 microseconds 00:08:02.831 Exit Latency: 4 microseconds 00:08:02.831 Relative Read Throughput: 0 00:08:02.831 Relative Read Latency: 0 00:08:02.831 Relative Write Throughput: 0 00:08:02.831 Relative Write Latency: 0 00:08:02.831 Idle Power: Not Reported 00:08:02.831 Active Power: Not Reported 00:08:02.831 Non-Operational Permissive Mode: Not Supported 00:08:02.831 00:08:02.831 Health Information 00:08:02.831 ================== 00:08:02.831 Critical Warnings: 00:08:02.831 Available Spare Space: OK 00:08:02.831 Temperature: OK 00:08:02.831 Device 
Reliability: OK 00:08:02.831 Read Only: No 00:08:02.831 Volatile Memory Backup: OK 00:08:02.831 Current Temperature: 323 Kelvin (50 Celsius) 00:08:02.831 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:02.831 Available Spare: 0% 00:08:02.831 Available Spare Threshold: 0% 00:08:02.831 Life Percentage Used: 0% 00:08:02.831 Data Units Read: 2218 00:08:02.831 Data Units Written: 2006 00:08:02.831 Host Read Commands: 115192 00:08:02.831 Host Write Commands: 113461 00:08:02.831 Controller Busy Time: 0 minutes 00:08:02.831 Power Cycles: 0 00:08:02.831 Power On Hours: 0 hours 00:08:02.831 Unsafe Shutdowns: 0 00:08:02.831 Unrecoverable Media Errors: 0 00:08:02.831 Lifetime Error Log Entries: 0 00:08:02.831 Warning Temperature Time: 0 minutes 00:08:02.831 Critical Temperature Time: 0 minutes 00:08:02.831 00:08:02.832 Number of Queues 00:08:02.832 ================ 00:08:02.832 Number of I/O Submission Queues: 64 00:08:02.832 Number of I/O Completion Queues: 64 00:08:02.832 00:08:02.832 ZNS Specific Controller Data 00:08:02.832 ============================ 00:08:02.832 Zone Append Size Limit: 0 00:08:02.832 00:08:02.832 00:08:02.832 Active Namespaces 00:08:02.832 ================= 00:08:02.832 Namespace ID:1 00:08:02.832 Error Recovery Timeout: Unlimited 00:08:02.832 Command Set Identifier: NVM (00h) 00:08:02.832 Deallocate: Supported 00:08:02.832 Deallocated/Unwritten Error: Supported 00:08:02.832 Deallocated Read Value: All 0x00 00:08:02.832 Deallocate in Write Zeroes: Not Supported 00:08:02.832 Deallocated Guard Field: 0xFFFF 00:08:02.832 Flush: Supported 00:08:02.832 Reservation: Not Supported 00:08:02.832 Namespace Sharing Capabilities: Private 00:08:02.832 Size (in LBAs): 1048576 (4GiB) 00:08:02.832 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.832 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.832 Thin Provisioning: Not Supported 00:08:02.832 Per-NS Atomic Units: No 00:08:02.832 Maximum Single Source Range Length: 128 00:08:02.832 Maximum Copy Length: 128 00:08:02.832 Maximum Source Range Count: 128 00:08:02.832 NGUID/EUI64 Never Reused: No 00:08:02.832 Namespace Write Protected: No 00:08:02.832 Number of LBA Formats: 8 00:08:02.832 Current LBA Format: LBA Format #04 00:08:02.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.832 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.832 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.832 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.832 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.832 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.832 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.832 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.832 00:08:02.832 NVM Specific Namespace Data 00:08:02.832 =========================== 00:08:02.832 Logical Block Storage Tag Mask: 0 00:08:02.832 Protection Information Capabilities: 00:08:02.832 16b Guard Protection Information Storage Tag Support: No 00:08:02.832 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.832 Storage Tag Check Read Support: No 00:08:02.832 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Namespace ID:2 00:08:02.832 Error Recovery Timeout: Unlimited 00:08:02.832 Command Set Identifier: NVM (00h) 00:08:02.832 Deallocate: Supported 00:08:02.832 Deallocated/Unwritten Error: Supported 00:08:02.832 Deallocated Read Value: All 0x00 00:08:02.832 Deallocate in Write Zeroes: Not Supported 00:08:02.832 Deallocated Guard Field: 0xFFFF 00:08:02.832 Flush: Supported 00:08:02.832 Reservation: Not Supported 00:08:02.832 Namespace Sharing Capabilities: Private 00:08:02.832 Size (in LBAs): 1048576 (4GiB) 00:08:02.832 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.832 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.832 Thin Provisioning: Not Supported 00:08:02.832 Per-NS Atomic Units: No 00:08:02.832 Maximum Single Source Range Length: 128 00:08:02.832 Maximum Copy Length: 128 00:08:02.832 Maximum Source Range Count: 128 00:08:02.832 NGUID/EUI64 Never Reused: No 00:08:02.832 Namespace Write Protected: No 00:08:02.832 Number of LBA Formats: 8 00:08:02.832 Current LBA Format: LBA Format #04 00:08:02.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.832 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.832 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.832 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.832 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.832 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.832 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.832 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.832 00:08:02.832 NVM Specific Namespace Data 00:08:02.832 =========================== 00:08:02.832 Logical Block Storage Tag Mask: 0 00:08:02.832 Protection Information Capabilities: 00:08:02.832 16b Guard Protection Information Storage Tag Support: No 00:08:02.832 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.832 Storage Tag Check Read Support: No 00:08:02.832 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Namespace ID:3 00:08:02.832 Error Recovery Timeout: Unlimited 00:08:02.832 Command Set Identifier: NVM (00h) 00:08:02.832 Deallocate: Supported 00:08:02.832 Deallocated/Unwritten Error: Supported 00:08:02.832 Deallocated Read Value: All 0x00 00:08:02.832 Deallocate in Write Zeroes: Not Supported 00:08:02.832 Deallocated Guard Field: 0xFFFF 00:08:02.832 Flush: Supported 00:08:02.832 Reservation: Not Supported 00:08:02.832 
Namespace Sharing Capabilities: Private 00:08:02.832 Size (in LBAs): 1048576 (4GiB) 00:08:02.832 Capacity (in LBAs): 1048576 (4GiB) 00:08:02.832 Utilization (in LBAs): 1048576 (4GiB) 00:08:02.832 Thin Provisioning: Not Supported 00:08:02.832 Per-NS Atomic Units: No 00:08:02.832 Maximum Single Source Range Length: 128 00:08:02.832 Maximum Copy Length: 128 00:08:02.832 Maximum Source Range Count: 128 00:08:02.832 NGUID/EUI64 Never Reused: No 00:08:02.832 Namespace Write Protected: No 00:08:02.832 Number of LBA Formats: 8 00:08:02.832 Current LBA Format: LBA Format #04 00:08:02.832 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:02.832 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:02.832 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:02.832 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:02.832 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:02.832 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:02.832 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:02.832 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:02.832 00:08:02.832 NVM Specific Namespace Data 00:08:02.832 =========================== 00:08:02.832 Logical Block Storage Tag Mask: 0 00:08:02.832 Protection Information Capabilities: 00:08:02.832 16b Guard Protection Information Storage Tag Support: No 00:08:02.832 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:02.832 Storage Tag Check Read Support: No 00:08:02.832 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:02.832 13:56:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:02.832 13:56:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:03.091 ===================================================== 00:08:03.091 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.091 ===================================================== 00:08:03.091 Controller Capabilities/Features 00:08:03.091 ================================ 00:08:03.091 Vendor ID: 1b36 00:08:03.091 Subsystem Vendor ID: 1af4 00:08:03.091 Serial Number: 12343 00:08:03.091 Model Number: QEMU NVMe Ctrl 00:08:03.091 Firmware Version: 8.0.0 00:08:03.091 Recommended Arb Burst: 6 00:08:03.091 IEEE OUI Identifier: 00 54 52 00:08:03.091 Multi-path I/O 00:08:03.091 May have multiple subsystem ports: No 00:08:03.091 May have multiple controllers: Yes 00:08:03.091 Associated with SR-IOV VF: No 00:08:03.091 Max Data Transfer Size: 524288 00:08:03.091 Max Number of Namespaces: 256 00:08:03.091 Max Number of I/O Queues: 64 00:08:03.091 NVMe Specification Version (VS): 1.4 00:08:03.091 NVMe Specification Version (Identify): 1.4 00:08:03.091 Maximum Queue Entries: 2048 
00:08:03.091 Contiguous Queues Required: Yes 00:08:03.091 Arbitration Mechanisms Supported 00:08:03.091 Weighted Round Robin: Not Supported 00:08:03.091 Vendor Specific: Not Supported 00:08:03.091 Reset Timeout: 7500 ms 00:08:03.091 Doorbell Stride: 4 bytes 00:08:03.091 NVM Subsystem Reset: Not Supported 00:08:03.091 Command Sets Supported 00:08:03.091 NVM Command Set: Supported 00:08:03.091 Boot Partition: Not Supported 00:08:03.091 Memory Page Size Minimum: 4096 bytes 00:08:03.091 Memory Page Size Maximum: 65536 bytes 00:08:03.091 Persistent Memory Region: Not Supported 00:08:03.091 Optional Asynchronous Events Supported 00:08:03.091 Namespace Attribute Notices: Supported 00:08:03.091 Firmware Activation Notices: Not Supported 00:08:03.091 ANA Change Notices: Not Supported 00:08:03.091 PLE Aggregate Log Change Notices: Not Supported 00:08:03.091 LBA Status Info Alert Notices: Not Supported 00:08:03.091 EGE Aggregate Log Change Notices: Not Supported 00:08:03.091 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.091 Zone Descriptor Change Notices: Not Supported 00:08:03.091 Discovery Log Change Notices: Not Supported 00:08:03.091 Controller Attributes 00:08:03.091 128-bit Host Identifier: Not Supported 00:08:03.091 Non-Operational Permissive Mode: Not Supported 00:08:03.091 NVM Sets: Not Supported 00:08:03.092 Read Recovery Levels: Not Supported 00:08:03.092 Endurance Groups: Supported 00:08:03.092 Predictable Latency Mode: Not Supported 00:08:03.092 Traffic Based Keep Alive: Not Supported 00:08:03.092 Namespace Granularity: Not Supported 00:08:03.092 SQ Associations: Not Supported 00:08:03.092 UUID List: Not Supported 00:08:03.092 Multi-Domain Subsystem: Not Supported 00:08:03.092 Fixed Capacity Management: Not Supported 00:08:03.092 Variable Capacity Management: Not Supported 00:08:03.092 Delete Endurance Group: Not Supported 00:08:03.092 Delete NVM Set: Not Supported 00:08:03.092 Extended LBA Formats Supported: Supported 00:08:03.092 Flexible Data Placement Supported: Supported 00:08:03.092 00:08:03.092 Controller Memory Buffer Support 00:08:03.092 ================================ 00:08:03.092 Supported: No 00:08:03.092 00:08:03.092 Persistent Memory Region Support 00:08:03.092 ================================ 00:08:03.092 Supported: No 00:08:03.092 00:08:03.092 Admin Command Set Attributes 00:08:03.092 ============================ 00:08:03.092 Security Send/Receive: Not Supported 00:08:03.092 Format NVM: Supported 00:08:03.092 Firmware Activate/Download: Not Supported 00:08:03.092 Namespace Management: Supported 00:08:03.092 Device Self-Test: Not Supported 00:08:03.092 Directives: Supported 00:08:03.092 NVMe-MI: Not Supported 00:08:03.092 Virtualization Management: Not Supported 00:08:03.092 Doorbell Buffer Config: Supported 00:08:03.092 Get LBA Status Capability: Not Supported 00:08:03.092 Command & Feature Lockdown Capability: Not Supported 00:08:03.092 Abort Command Limit: 4 00:08:03.092 Async Event Request Limit: 4 00:08:03.092 Number of Firmware Slots: N/A 00:08:03.092 Firmware Slot 1 Read-Only: N/A 00:08:03.092 Firmware Activation Without Reset: N/A 00:08:03.092 Multiple Update Detection Support: N/A 00:08:03.092 Firmware Update Granularity: No Information Provided 00:08:03.092 Per-Namespace SMART Log: Yes 00:08:03.092 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.092 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:03.092 Command Effects Log Page: Supported 00:08:03.092 Get Log Page Extended Data: Supported 00:08:03.092 Telemetry Log Pages: Not
Supported 00:08:03.092 Persistent Event Log Pages: Not Supported 00:08:03.092 Supported Log Pages Log Page: May Support 00:08:03.092 Commands Supported & Effects Log Page: Not Supported 00:08:03.092 Feature Identifiers & Effects Log Page: May Support 00:08:03.092 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.092 Data Area 4 for Telemetry Log: Not Supported 00:08:03.092 Error Log Page Entries Supported: 1 00:08:03.092 Keep Alive: Not Supported 00:08:03.092 00:08:03.092 NVM Command Set Attributes 00:08:03.092 ========================== 00:08:03.092 Submission Queue Entry Size 00:08:03.092 Max: 64 00:08:03.092 Min: 64 00:08:03.092 Completion Queue Entry Size 00:08:03.092 Max: 16 00:08:03.092 Min: 16 00:08:03.092 Number of Namespaces: 256 00:08:03.092 Compare Command: Supported 00:08:03.092 Write Uncorrectable Command: Not Supported 00:08:03.092 Dataset Management Command: Supported 00:08:03.092 Write Zeroes Command: Supported 00:08:03.092 Set Features Save Field: Supported 00:08:03.092 Reservations: Not Supported 00:08:03.092 Timestamp: Supported 00:08:03.092 Copy: Supported 00:08:03.092 Volatile Write Cache: Present 00:08:03.092 Atomic Write Unit (Normal): 1 00:08:03.092 Atomic Write Unit (PFail): 1 00:08:03.092 Atomic Compare & Write Unit: 1 00:08:03.092 Fused Compare & Write: Not Supported 00:08:03.092 Scatter-Gather List 00:08:03.092 SGL Command Set: Supported 00:08:03.092 SGL Keyed: Not Supported 00:08:03.092 SGL Bit Bucket Descriptor: Not Supported 00:08:03.092 SGL Metadata Pointer: Not Supported 00:08:03.092 Oversized SGL: Not Supported 00:08:03.092 SGL Metadata Address: Not Supported 00:08:03.092 SGL Offset: Not Supported 00:08:03.092 Transport SGL Data Block: Not Supported 00:08:03.092 Replay Protected Memory Block: Not Supported 00:08:03.092 00:08:03.092 Firmware Slot Information 00:08:03.092 ========================= 00:08:03.092 Active slot: 1 00:08:03.092 Slot 1 Firmware Revision: 1.0 00:08:03.092 00:08:03.092 00:08:03.092 Commands Supported and Effects 00:08:03.092 ============================== 00:08:03.092 Admin Commands 00:08:03.092 -------------- 00:08:03.092 Delete I/O Submission Queue (00h): Supported 00:08:03.092 Create I/O Submission Queue (01h): Supported 00:08:03.092 Get Log Page (02h): Supported 00:08:03.092 Delete I/O Completion Queue (04h): Supported 00:08:03.092 Create I/O Completion Queue (05h): Supported 00:08:03.092 Identify (06h): Supported 00:08:03.092 Abort (08h): Supported 00:08:03.092 Set Features (09h): Supported 00:08:03.092 Get Features (0Ah): Supported 00:08:03.092 Asynchronous Event Request (0Ch): Supported 00:08:03.092 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.092 Directive Send (19h): Supported 00:08:03.092 Directive Receive (1Ah): Supported 00:08:03.092 Virtualization Management (1Ch): Supported 00:08:03.092 Doorbell Buffer Config (7Ch): Supported 00:08:03.092 Format NVM (80h): Supported LBA-Change 00:08:03.092 I/O Commands 00:08:03.092 ------------ 00:08:03.092 Flush (00h): Supported LBA-Change 00:08:03.092 Write (01h): Supported LBA-Change 00:08:03.092 Read (02h): Supported 00:08:03.092 Compare (05h): Supported 00:08:03.092 Write Zeroes (08h): Supported LBA-Change 00:08:03.092 Dataset Management (09h): Supported LBA-Change 00:08:03.092 Unknown (0Ch): Supported 00:08:03.092 Unknown (12h): Supported 00:08:03.092 Copy (19h): Supported LBA-Change 00:08:03.092 Unknown (1Dh): Supported LBA-Change 00:08:03.092 00:08:03.092 Error Log 00:08:03.092 ========= 00:08:03.092 00:08:03.092 Arbitration 00:08:03.092 ===========
00:08:03.092 Arbitration Burst: no limit 00:08:03.092 00:08:03.092 Power Management 00:08:03.092 ================ 00:08:03.092 Number of Power States: 1 00:08:03.092 Current Power State: Power State #0 00:08:03.092 Power State #0: 00:08:03.092 Max Power: 25.00 W 00:08:03.092 Non-Operational State: Operational 00:08:03.092 Entry Latency: 16 microseconds 00:08:03.092 Exit Latency: 4 microseconds 00:08:03.092 Relative Read Throughput: 0 00:08:03.092 Relative Read Latency: 0 00:08:03.092 Relative Write Throughput: 0 00:08:03.092 Relative Write Latency: 0 00:08:03.092 Idle Power: Not Reported 00:08:03.092 Active Power: Not Reported 00:08:03.092 Non-Operational Permissive Mode: Not Supported 00:08:03.092 00:08:03.092 Health Information 00:08:03.092 ================== 00:08:03.092 Critical Warnings: 00:08:03.092 Available Spare Space: OK 00:08:03.092 Temperature: OK 00:08:03.092 Device Reliability: OK 00:08:03.092 Read Only: No 00:08:03.092 Volatile Memory Backup: OK 00:08:03.092 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.092 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.092 Available Spare: 0% 00:08:03.092 Available Spare Threshold: 0% 00:08:03.092 Life Percentage Used: 0% 00:08:03.092 Data Units Read: 760 00:08:03.092 Data Units Written: 689 00:08:03.092 Host Read Commands: 38656 00:08:03.092 Host Write Commands: 38079 00:08:03.092 Controller Busy Time: 0 minutes 00:08:03.092 Power Cycles: 0 00:08:03.092 Power On Hours: 0 hours 00:08:03.092 Unsafe Shutdowns: 0 00:08:03.092 Unrecoverable Media Errors: 0 00:08:03.092 Lifetime Error Log Entries: 0 00:08:03.092 Warning Temperature Time: 0 minutes 00:08:03.092 Critical Temperature Time: 0 minutes 00:08:03.092 00:08:03.092 Number of Queues 00:08:03.092 ================ 00:08:03.092 Number of I/O Submission Queues: 64 00:08:03.092 Number of I/O Completion Queues: 64 00:08:03.092 00:08:03.092 ZNS Specific Controller Data 00:08:03.092 ============================ 00:08:03.092 Zone Append Size Limit: 0 00:08:03.092 00:08:03.092 00:08:03.092 Active Namespaces 00:08:03.093 ================= 00:08:03.093 Namespace ID:1 00:08:03.093 Error Recovery Timeout: Unlimited 00:08:03.093 Command Set Identifier: NVM (00h) 00:08:03.093 Deallocate: Supported 00:08:03.093 Deallocated/Unwritten Error: Supported 00:08:03.093 Deallocated Read Value: All 0x00 00:08:03.093 Deallocate in Write Zeroes: Not Supported 00:08:03.093 Deallocated Guard Field: 0xFFFF 00:08:03.093 Flush: Supported 00:08:03.093 Reservation: Not Supported 00:08:03.093 Namespace Sharing Capabilities: Multiple Controllers 00:08:03.093 Size (in LBAs): 262144 (1GiB) 00:08:03.093 Capacity (in LBAs): 262144 (1GiB) 00:08:03.093 Utilization (in LBAs): 262144 (1GiB) 00:08:03.093 Thin Provisioning: Not Supported 00:08:03.093 Per-NS Atomic Units: No 00:08:03.093 Maximum Single Source Range Length: 128 00:08:03.093 Maximum Copy Length: 128 00:08:03.093 Maximum Source Range Count: 128 00:08:03.093 NGUID/EUI64 Never Reused: No 00:08:03.093 Namespace Write Protected: No 00:08:03.093 Endurance group ID: 1 00:08:03.093 Number of LBA Formats: 8 00:08:03.093 Current LBA Format: LBA Format #04 00:08:03.093 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.093 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.093 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.093 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.093 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.093 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.093 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:03.093 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.093 00:08:03.093 Get Feature FDP: 00:08:03.093 ================ 00:08:03.093 Enabled: Yes 00:08:03.093 FDP configuration index: 0 00:08:03.093 00:08:03.093 FDP configurations log page 00:08:03.093 =========================== 00:08:03.093 Number of FDP configurations: 1 00:08:03.093 Version: 0 00:08:03.093 Size: 112 00:08:03.093 FDP Configuration Descriptor: 0 00:08:03.093 Descriptor Size: 96 00:08:03.093 Reclaim Group Identifier format: 2 00:08:03.093 FDP Volatile Write Cache: Not Present 00:08:03.093 FDP Configuration: Valid 00:08:03.093 Vendor Specific Size: 0 00:08:03.093 Number of Reclaim Groups: 2 00:08:03.093 Number of Reclaim Unit Handles: 8 00:08:03.093 Max Placement Identifiers: 128 00:08:03.093 Number of Namespaces Supported: 256 00:08:03.093 Reclaim Unit Nominal Size: 6000000 bytes 00:08:03.093 Estimated Reclaim Unit Time Limit: Not Reported 00:08:03.093 RUH Desc #000: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #001: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #002: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #003: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #004: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #005: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #006: RUH Type: Initially Isolated 00:08:03.093 RUH Desc #007: RUH Type: Initially Isolated 00:08:03.093 00:08:03.093 FDP reclaim unit handle usage log page 00:08:03.093 ====================================== 00:08:03.093 Number of Reclaim Unit Handles: 8 00:08:03.093 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:03.093 RUH Usage Desc #001: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #002: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #003: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #004: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #005: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #006: RUH Attributes: Unused 00:08:03.093 RUH Usage Desc #007: RUH Attributes: Unused 00:08:03.093 00:08:03.093 FDP statistics log page 00:08:03.093 ======================= 00:08:03.093 Host bytes with metadata written: 455647232 00:08:03.093 Media bytes with metadata written: 455700480 00:08:03.093 Media bytes erased: 0 00:08:03.093 00:08:03.093 FDP events log page 00:08:03.093 =================== 00:08:03.093 Number of FDP events: 0 00:08:03.093 00:08:03.093 NVM Specific Namespace Data 00:08:03.093 =========================== 00:08:03.093 Logical Block Storage Tag Mask: 0 00:08:03.093 Protection Information Capabilities: 00:08:03.093 16b Guard Protection Information Storage Tag Support: No 00:08:03.093 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.093 Storage Tag Check Read Support: No 00:08:03.093 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.093 00:08:03.093 real 0m1.009s 00:08:03.093 user 0m0.340s 00:08:03.093 sys 0m0.470s 00:08:03.093 13:56:41 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.093 13:56:41 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:03.093 ************************************ 00:08:03.093 END TEST nvme_identify 00:08:03.093 ************************************ 00:08:03.093 13:56:41 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:03.093 13:56:41 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.093 13:56:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.093 13:56:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.093 ************************************ 00:08:03.093 START TEST nvme_perf 00:08:03.093 ************************************ 00:08:03.093 13:56:41 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:03.093 13:56:41 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:04.471 Initializing NVMe Controllers 00:08:04.471 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:04.471 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:04.471 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:04.471 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:04.471 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:04.471 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:04.471 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:04.471 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:04.471 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:04.471 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:04.471 Initialization complete. Launching workers. 
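For anyone replaying this step outside the Jenkins harness, here is a minimal sketch that reruns the two SPDK CLI invocations captured in this log by hand. It assumes only a local SPDK build at the path the log itself uses; every flag is copied verbatim from the commands above, so it queries the same QEMU-emulated controller at 0000:00:13.0 and then drives the same 128-deep, 12288-byte read workload for one second before the latency summary below is printed.

#!/usr/bin/env bash
# Minimal reproduction sketch, assuming the SPDK build tree from this log.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

# Dump the identify data for the controller at PCIe address 0000:00:13.0,
# exactly as nvme.sh@16 does above.
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

# Run the read workload measured below: queue depth 128 (-q), 12288-byte
# I/Os (-o), 1 second (-t), with the same latency-tracking flags (-LL)
# as the nvme_perf test.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

Both binaries claim the NVMe devices through SPDK's userspace driver, so the sketch presumes the same device-binding setup (vfio/uio) that the autotest VM already has in place.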
00:08:04.471 ======================================================== 00:08:04.471 Latency(us) 00:08:04.471 Device Information : IOPS MiB/s Average min max 00:08:04.471 PCIE (0000:00:10.0) NSID 1 from core 0: 16530.10 193.71 7745.81 4438.92 23599.22 00:08:04.471 PCIE (0000:00:11.0) NSID 1 from core 0: 16530.10 193.71 7740.90 4296.29 23164.68 00:08:04.471 PCIE (0000:00:13.0) NSID 1 from core 0: 16530.10 193.71 7734.57 3770.26 22840.08 00:08:04.471 PCIE (0000:00:12.0) NSID 1 from core 0: 16530.10 193.71 7728.13 3572.92 22456.22 00:08:04.471 PCIE (0000:00:12.0) NSID 2 from core 0: 16530.10 193.71 7721.63 3410.11 22046.69 00:08:04.471 PCIE (0000:00:12.0) NSID 3 from core 0: 16530.10 193.71 7715.24 3217.31 21610.85 00:08:04.471 ======================================================== 00:08:04.471 Total : 99180.58 1162.27 7731.05 3217.31 23599.22 00:08:04.471 00:08:04.471 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.471 ================================================================================= 00:08:04.471 1.00000% : 5696.591us 00:08:04.471 10.00000% : 5923.446us 00:08:04.471 25.00000% : 6200.714us 00:08:04.471 50.00000% : 6604.012us 00:08:04.471 75.00000% : 8217.206us 00:08:04.471 90.00000% : 11947.717us 00:08:04.471 95.00000% : 14317.095us 00:08:04.471 98.00000% : 15325.342us 00:08:04.471 99.00000% : 16031.114us 00:08:04.471 99.50000% : 18148.431us 00:08:04.471 99.90000% : 23290.486us 00:08:04.471 99.99000% : 23592.960us 00:08:04.471 99.99900% : 23693.785us 00:08:04.471 99.99990% : 23693.785us 00:08:04.471 99.99999% : 23693.785us 00:08:04.471 00:08:04.471 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.471 ================================================================================= 00:08:04.471 1.00000% : 5772.209us 00:08:04.471 10.00000% : 5973.858us 00:08:04.471 25.00000% : 6200.714us 00:08:04.471 50.00000% : 6553.600us 00:08:04.471 75.00000% : 8166.794us 00:08:04.471 90.00000% : 11796.480us 00:08:04.471 95.00000% : 14317.095us 00:08:04.471 98.00000% : 15325.342us 00:08:04.471 99.00000% : 16131.938us 00:08:04.471 99.50000% : 17946.782us 00:08:04.471 99.90000% : 22887.188us 00:08:04.471 99.99000% : 23189.662us 00:08:04.471 99.99900% : 23189.662us 00:08:04.471 99.99990% : 23189.662us 00:08:04.471 99.99999% : 23189.662us 00:08:04.471 00:08:04.472 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:04.472 ================================================================================= 00:08:04.472 1.00000% : 5747.003us 00:08:04.472 10.00000% : 5973.858us 00:08:04.472 25.00000% : 6200.714us 00:08:04.472 50.00000% : 6553.600us 00:08:04.472 75.00000% : 8267.618us 00:08:04.472 90.00000% : 12149.366us 00:08:04.472 95.00000% : 14317.095us 00:08:04.472 98.00000% : 15123.692us 00:08:04.472 99.00000% : 16031.114us 00:08:04.472 99.50000% : 17745.132us 00:08:04.472 99.90000% : 22584.714us 00:08:04.472 99.99000% : 22887.188us 00:08:04.472 99.99900% : 22887.188us 00:08:04.472 99.99990% : 22887.188us 00:08:04.472 99.99999% : 22887.188us 00:08:04.472 00:08:04.472 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:04.472 ================================================================================= 00:08:04.472 1.00000% : 5747.003us 00:08:04.472 10.00000% : 5973.858us 00:08:04.472 25.00000% : 6200.714us 00:08:04.472 50.00000% : 6553.600us 00:08:04.472 75.00000% : 8267.618us 00:08:04.472 90.00000% : 12048.542us 00:08:04.472 95.00000% : 14115.446us 00:08:04.472 98.00000% : 15426.166us 00:08:04.472 
99.00000% : 16434.412us 00:08:04.472 99.50000% : 17341.834us 00:08:04.472 99.90000% : 22181.415us 00:08:04.472 99.99000% : 22483.889us 00:08:04.472 99.99900% : 22483.889us 00:08:04.472 99.99990% : 22483.889us 00:08:04.472 99.99999% : 22483.889us 00:08:04.472 00:08:04.472 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:04.472 ================================================================================= 00:08:04.472 1.00000% : 5721.797us 00:08:04.472 10.00000% : 5973.858us 00:08:04.472 25.00000% : 6200.714us 00:08:04.472 50.00000% : 6553.600us 00:08:04.472 75.00000% : 8318.031us 00:08:04.472 90.00000% : 12149.366us 00:08:04.472 95.00000% : 14014.622us 00:08:04.472 98.00000% : 15325.342us 00:08:04.472 99.00000% : 16131.938us 00:08:04.472 99.50000% : 17039.360us 00:08:04.472 99.90000% : 21778.117us 00:08:04.472 99.99000% : 22080.591us 00:08:04.472 99.99900% : 22080.591us 00:08:04.472 99.99990% : 22080.591us 00:08:04.472 99.99999% : 22080.591us 00:08:04.472 00:08:04.472 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:04.472 ================================================================================= 00:08:04.472 1.00000% : 5747.003us 00:08:04.472 10.00000% : 5973.858us 00:08:04.472 25.00000% : 6200.714us 00:08:04.472 50.00000% : 6553.600us 00:08:04.472 75.00000% : 8217.206us 00:08:04.472 90.00000% : 12098.954us 00:08:04.472 95.00000% : 14216.271us 00:08:04.472 98.00000% : 15325.342us 00:08:04.472 99.00000% : 16131.938us 00:08:04.472 99.50000% : 17241.009us 00:08:04.472 99.90000% : 21374.818us 00:08:04.472 99.99000% : 21677.292us 00:08:04.472 99.99900% : 21677.292us 00:08:04.472 99.99990% : 21677.292us 00:08:04.472 99.99999% : 21677.292us 00:08:04.472 00:08:04.472 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:04.472 ============================================================================== 00:08:04.472 Range in us Cumulative IO count 00:08:04.472 4436.283 - 4461.489: 0.0181% ( 3) 00:08:04.472 4461.489 - 4486.695: 0.0422% ( 4) 00:08:04.472 4486.695 - 4511.902: 0.0483% ( 1) 00:08:04.472 4511.902 - 4537.108: 0.0543% ( 1) 00:08:04.472 4537.108 - 4562.314: 0.0664% ( 2) 00:08:04.472 4562.314 - 4587.520: 0.0845% ( 3) 00:08:04.472 4587.520 - 4612.726: 0.0905% ( 1) 00:08:04.472 4612.726 - 4637.932: 0.1086% ( 3) 00:08:04.472 4637.932 - 4663.138: 0.1146% ( 1) 00:08:04.472 4663.138 - 4688.345: 0.1267% ( 2) 00:08:04.472 4688.345 - 4713.551: 0.1448% ( 3) 00:08:04.472 4713.551 - 4738.757: 0.1569% ( 2) 00:08:04.472 4738.757 - 4763.963: 0.1689% ( 2) 00:08:04.472 4763.963 - 4789.169: 0.1810% ( 2) 00:08:04.472 4789.169 - 4814.375: 0.1931% ( 2) 00:08:04.472 4814.375 - 4839.582: 0.2051% ( 2) 00:08:04.472 4839.582 - 4864.788: 0.2172% ( 2) 00:08:04.472 4864.788 - 4889.994: 0.2292% ( 2) 00:08:04.472 4889.994 - 4915.200: 0.2413% ( 2) 00:08:04.472 4915.200 - 4940.406: 0.2534% ( 2) 00:08:04.472 4940.406 - 4965.612: 0.2654% ( 2) 00:08:04.472 4965.612 - 4990.818: 0.2775% ( 2) 00:08:04.472 4990.818 - 5016.025: 0.2956% ( 3) 00:08:04.472 5016.025 - 5041.231: 0.3016% ( 1) 00:08:04.472 5041.231 - 5066.437: 0.3137% ( 2) 00:08:04.472 5066.437 - 5091.643: 0.3318% ( 3) 00:08:04.472 5091.643 - 5116.849: 0.3378% ( 1) 00:08:04.472 5116.849 - 5142.055: 0.3499% ( 2) 00:08:04.472 5142.055 - 5167.262: 0.3620% ( 2) 00:08:04.472 5167.262 - 5192.468: 0.3801% ( 3) 00:08:04.472 5192.468 - 5217.674: 0.3861% ( 1) 00:08:04.472 5545.354 - 5570.560: 0.4163% ( 5) 00:08:04.472 5570.560 - 5595.766: 0.4645% ( 8) 00:08:04.472 5595.766 - 5620.972: 0.5430% ( 13) 00:08:04.472 
5620.972 - 5646.178: 0.7058% ( 27) 00:08:04.472 5646.178 - 5671.385: 0.9833% ( 46) 00:08:04.472 5671.385 - 5696.591: 1.3212% ( 56) 00:08:04.472 5696.591 - 5721.797: 1.8883% ( 94) 00:08:04.472 5721.797 - 5747.003: 2.5700% ( 113) 00:08:04.472 5747.003 - 5772.209: 3.5111% ( 156) 00:08:04.472 5772.209 - 5797.415: 4.4703% ( 159) 00:08:04.472 5797.415 - 5822.622: 5.5623% ( 181) 00:08:04.472 5822.622 - 5847.828: 6.7266% ( 193) 00:08:04.472 5847.828 - 5873.034: 8.0056% ( 212) 00:08:04.472 5873.034 - 5898.240: 9.3931% ( 230) 00:08:04.472 5898.240 - 5923.446: 10.6781% ( 213) 00:08:04.472 5923.446 - 5948.652: 11.9088% ( 204) 00:08:04.472 5948.652 - 5973.858: 13.2179% ( 217) 00:08:04.472 5973.858 - 5999.065: 14.5210% ( 216) 00:08:04.472 5999.065 - 6024.271: 15.9628% ( 239) 00:08:04.472 6024.271 - 6049.477: 17.3745% ( 234) 00:08:04.472 6049.477 - 6074.683: 18.8465% ( 244) 00:08:04.472 6074.683 - 6099.889: 20.2220% ( 228) 00:08:04.472 6099.889 - 6125.095: 21.7483% ( 253) 00:08:04.472 6125.095 - 6150.302: 23.2324% ( 246) 00:08:04.472 6150.302 - 6175.508: 24.7708% ( 255) 00:08:04.472 6175.508 - 6200.714: 26.2307% ( 242) 00:08:04.472 6200.714 - 6225.920: 27.8234% ( 264) 00:08:04.472 6225.920 - 6251.126: 29.4281% ( 266) 00:08:04.472 6251.126 - 6276.332: 30.9303% ( 249) 00:08:04.472 6276.332 - 6301.538: 32.4566% ( 253) 00:08:04.472 6301.538 - 6326.745: 34.0372% ( 262) 00:08:04.472 6326.745 - 6351.951: 35.5876% ( 257) 00:08:04.472 6351.951 - 6377.157: 37.1803% ( 264) 00:08:04.472 6377.157 - 6402.363: 38.8393% ( 275) 00:08:04.472 6402.363 - 6427.569: 40.3656% ( 253) 00:08:04.472 6427.569 - 6452.775: 42.0729% ( 283) 00:08:04.472 6452.775 - 6503.188: 45.2401% ( 525) 00:08:04.472 6503.188 - 6553.600: 48.5159% ( 543) 00:08:04.472 6553.600 - 6604.012: 51.7797% ( 541) 00:08:04.472 6604.012 - 6654.425: 54.8021% ( 501) 00:08:04.472 6654.425 - 6704.837: 57.4023% ( 431) 00:08:04.472 6704.837 - 6755.249: 59.6766% ( 377) 00:08:04.472 6755.249 - 6805.662: 61.2874% ( 267) 00:08:04.472 6805.662 - 6856.074: 62.4035% ( 185) 00:08:04.472 6856.074 - 6906.486: 63.2903% ( 147) 00:08:04.472 6906.486 - 6956.898: 64.1168% ( 137) 00:08:04.472 6956.898 - 7007.311: 64.7925% ( 112) 00:08:04.472 7007.311 - 7057.723: 65.3837% ( 98) 00:08:04.472 7057.723 - 7108.135: 65.9447% ( 93) 00:08:04.472 7108.135 - 7158.548: 66.4334% ( 81) 00:08:04.472 7158.548 - 7208.960: 66.8436% ( 68) 00:08:04.472 7208.960 - 7259.372: 67.3142% ( 78) 00:08:04.472 7259.372 - 7309.785: 67.6762% ( 60) 00:08:04.472 7309.785 - 7360.197: 68.1528% ( 79) 00:08:04.472 7360.197 - 7410.609: 68.5509% ( 66) 00:08:04.472 7410.609 - 7461.022: 69.0275% ( 79) 00:08:04.472 7461.022 - 7511.434: 69.4438% ( 69) 00:08:04.472 7511.434 - 7561.846: 69.9023% ( 76) 00:08:04.472 7561.846 - 7612.258: 70.3427% ( 73) 00:08:04.472 7612.258 - 7662.671: 70.7710% ( 71) 00:08:04.472 7662.671 - 7713.083: 71.2114% ( 73) 00:08:04.472 7713.083 - 7763.495: 71.6216% ( 68) 00:08:04.472 7763.495 - 7813.908: 71.9715% ( 58) 00:08:04.472 7813.908 - 7864.320: 72.3938% ( 70) 00:08:04.472 7864.320 - 7914.732: 72.7980% ( 67) 00:08:04.472 7914.732 - 7965.145: 73.1479% ( 58) 00:08:04.473 7965.145 - 8015.557: 73.5220% ( 62) 00:08:04.473 8015.557 - 8065.969: 73.9081% ( 64) 00:08:04.473 8065.969 - 8116.382: 74.2640% ( 59) 00:08:04.473 8116.382 - 8166.794: 74.6803% ( 69) 00:08:04.473 8166.794 - 8217.206: 75.0845% ( 67) 00:08:04.473 8217.206 - 8267.618: 75.5369% ( 75) 00:08:04.473 8267.618 - 8318.031: 75.9351% ( 66) 00:08:04.473 8318.031 - 8368.443: 76.3514% ( 69) 00:08:04.473 8368.443 - 8418.855: 76.7133% ( 60) 
00:08:04.473 8418.855 - 8469.268: 77.0572% ( 57) 00:08:04.473 8469.268 - 8519.680: 77.3468% ( 48) 00:08:04.473 8519.680 - 8570.092: 77.6665% ( 53) 00:08:04.473 8570.092 - 8620.505: 78.0043% ( 56) 00:08:04.473 8620.505 - 8670.917: 78.2638% ( 43) 00:08:04.473 8670.917 - 8721.329: 78.5051% ( 40) 00:08:04.473 8721.329 - 8771.742: 78.7946% ( 48) 00:08:04.473 8771.742 - 8822.154: 79.0420% ( 41) 00:08:04.473 8822.154 - 8872.566: 79.3255% ( 47) 00:08:04.473 8872.566 - 8922.978: 79.6211% ( 49) 00:08:04.473 8922.978 - 8973.391: 79.8986% ( 46) 00:08:04.473 8973.391 - 9023.803: 80.1762% ( 46) 00:08:04.473 9023.803 - 9074.215: 80.4657% ( 48) 00:08:04.473 9074.215 - 9124.628: 80.7372% ( 45) 00:08:04.473 9124.628 - 9175.040: 81.0087% ( 45) 00:08:04.473 9175.040 - 9225.452: 81.3164% ( 51) 00:08:04.473 9225.452 - 9275.865: 81.6180% ( 50) 00:08:04.473 9275.865 - 9326.277: 81.9257% ( 51) 00:08:04.473 9326.277 - 9376.689: 82.1911% ( 44) 00:08:04.473 9376.689 - 9427.102: 82.4867% ( 49) 00:08:04.473 9427.102 - 9477.514: 82.7341% ( 41) 00:08:04.473 9477.514 - 9527.926: 83.0176% ( 47) 00:08:04.473 9527.926 - 9578.338: 83.2589% ( 40) 00:08:04.473 9578.338 - 9628.751: 83.4882% ( 38) 00:08:04.473 9628.751 - 9679.163: 83.7054% ( 36) 00:08:04.473 9679.163 - 9729.575: 83.9225% ( 36) 00:08:04.473 9729.575 - 9779.988: 84.1337% ( 35) 00:08:04.473 9779.988 - 9830.400: 84.3147% ( 30) 00:08:04.473 9830.400 - 9880.812: 84.4715% ( 26) 00:08:04.473 9880.812 - 9931.225: 84.6525% ( 30) 00:08:04.473 9931.225 - 9981.637: 84.7973% ( 24) 00:08:04.473 9981.637 - 10032.049: 84.9662% ( 28) 00:08:04.473 10032.049 - 10082.462: 85.1291% ( 27) 00:08:04.473 10082.462 - 10132.874: 85.2920% ( 27) 00:08:04.473 10132.874 - 10183.286: 85.4126% ( 20) 00:08:04.473 10183.286 - 10233.698: 85.5393% ( 21) 00:08:04.473 10233.698 - 10284.111: 85.6540% ( 19) 00:08:04.473 10284.111 - 10334.523: 85.7686% ( 19) 00:08:04.473 10334.523 - 10384.935: 85.8832% ( 19) 00:08:04.473 10384.935 - 10435.348: 85.9677% ( 14) 00:08:04.473 10435.348 - 10485.760: 86.1125% ( 24) 00:08:04.473 10485.760 - 10536.172: 86.2512% ( 23) 00:08:04.473 10536.172 - 10586.585: 86.4201% ( 28) 00:08:04.473 10586.585 - 10636.997: 86.5468% ( 21) 00:08:04.473 10636.997 - 10687.409: 86.6916% ( 24) 00:08:04.473 10687.409 - 10737.822: 86.8304% ( 23) 00:08:04.473 10737.822 - 10788.234: 86.9389% ( 18) 00:08:04.473 10788.234 - 10838.646: 87.0717% ( 22) 00:08:04.473 10838.646 - 10889.058: 87.2225% ( 25) 00:08:04.473 10889.058 - 10939.471: 87.4035% ( 30) 00:08:04.473 10939.471 - 10989.883: 87.5483% ( 24) 00:08:04.473 10989.883 - 11040.295: 87.6569% ( 18) 00:08:04.473 11040.295 - 11090.708: 87.7594% ( 17) 00:08:04.473 11090.708 - 11141.120: 87.8861% ( 21) 00:08:04.473 11141.120 - 11191.532: 88.0128% ( 21) 00:08:04.473 11191.532 - 11241.945: 88.1274% ( 19) 00:08:04.473 11241.945 - 11292.357: 88.2239% ( 16) 00:08:04.473 11292.357 - 11342.769: 88.3627% ( 23) 00:08:04.473 11342.769 - 11393.182: 88.4713% ( 18) 00:08:04.473 11393.182 - 11443.594: 88.6100% ( 23) 00:08:04.473 11443.594 - 11494.006: 88.7729% ( 27) 00:08:04.473 11494.006 - 11544.418: 88.9056% ( 22) 00:08:04.473 11544.418 - 11594.831: 89.0384% ( 22) 00:08:04.473 11594.831 - 11645.243: 89.1651% ( 21) 00:08:04.473 11645.243 - 11695.655: 89.3219% ( 26) 00:08:04.473 11695.655 - 11746.068: 89.5029% ( 30) 00:08:04.473 11746.068 - 11796.480: 89.6899% ( 31) 00:08:04.473 11796.480 - 11846.892: 89.8287% ( 23) 00:08:04.473 11846.892 - 11897.305: 89.9493% ( 20) 00:08:04.473 11897.305 - 11947.717: 90.1182% ( 28) 00:08:04.473 11947.717 - 11998.129: 90.2872% ( 
28) 00:08:04.473 11998.129 - 12048.542: 90.4199% ( 22) 00:08:04.473 12048.542 - 12098.954: 90.5586% ( 23) 00:08:04.473 12098.954 - 12149.366: 90.7155% ( 26) 00:08:04.473 12149.366 - 12199.778: 90.8301% ( 19) 00:08:04.473 12199.778 - 12250.191: 90.9568% ( 21) 00:08:04.473 12250.191 - 12300.603: 91.0714% ( 19) 00:08:04.473 12300.603 - 12351.015: 91.2042% ( 22) 00:08:04.473 12351.015 - 12401.428: 91.3429% ( 23) 00:08:04.473 12401.428 - 12451.840: 91.4334% ( 15) 00:08:04.473 12451.840 - 12502.252: 91.5360% ( 17) 00:08:04.473 12502.252 - 12552.665: 91.6264% ( 15) 00:08:04.473 12552.665 - 12603.077: 91.6807% ( 9) 00:08:04.473 12603.077 - 12653.489: 91.7712% ( 15) 00:08:04.473 12653.489 - 12703.902: 91.8738% ( 17) 00:08:04.473 12703.902 - 12754.314: 91.9462% ( 12) 00:08:04.473 12754.314 - 12804.726: 92.0125% ( 11) 00:08:04.473 12804.726 - 12855.138: 92.0548% ( 7) 00:08:04.473 12855.138 - 12905.551: 92.1091% ( 9) 00:08:04.473 12905.551 - 13006.375: 92.2358% ( 21) 00:08:04.473 13006.375 - 13107.200: 92.3986% ( 27) 00:08:04.473 13107.200 - 13208.025: 92.5495% ( 25) 00:08:04.473 13208.025 - 13308.849: 92.7305% ( 30) 00:08:04.473 13308.849 - 13409.674: 92.8752% ( 24) 00:08:04.473 13409.674 - 13510.498: 93.0562% ( 30) 00:08:04.473 13510.498 - 13611.323: 93.2613% ( 34) 00:08:04.473 13611.323 - 13712.148: 93.4122% ( 25) 00:08:04.473 13712.148 - 13812.972: 93.6776% ( 44) 00:08:04.473 13812.972 - 13913.797: 94.0154% ( 56) 00:08:04.473 13913.797 - 14014.622: 94.3472% ( 55) 00:08:04.473 14014.622 - 14115.446: 94.6006% ( 42) 00:08:04.473 14115.446 - 14216.271: 94.9083% ( 51) 00:08:04.473 14216.271 - 14317.095: 95.2160% ( 51) 00:08:04.473 14317.095 - 14417.920: 95.5176% ( 50) 00:08:04.473 14417.920 - 14518.745: 95.8856% ( 61) 00:08:04.473 14518.745 - 14619.569: 96.2476% ( 60) 00:08:04.473 14619.569 - 14720.394: 96.5734% ( 54) 00:08:04.473 14720.394 - 14821.218: 96.8750% ( 50) 00:08:04.473 14821.218 - 14922.043: 97.1163% ( 40) 00:08:04.473 14922.043 - 15022.868: 97.3697% ( 42) 00:08:04.473 15022.868 - 15123.692: 97.5627% ( 32) 00:08:04.473 15123.692 - 15224.517: 97.8161% ( 42) 00:08:04.473 15224.517 - 15325.342: 98.0273% ( 35) 00:08:04.473 15325.342 - 15426.166: 98.2022% ( 29) 00:08:04.473 15426.166 - 15526.991: 98.3953% ( 32) 00:08:04.473 15526.991 - 15627.815: 98.6064% ( 35) 00:08:04.473 15627.815 - 15728.640: 98.6969% ( 15) 00:08:04.473 15728.640 - 15829.465: 98.8417% ( 24) 00:08:04.473 15829.465 - 15930.289: 98.9141% ( 12) 00:08:04.473 15930.289 - 16031.114: 99.0227% ( 18) 00:08:04.473 16031.114 - 16131.938: 99.1011% ( 13) 00:08:04.473 16131.938 - 16232.763: 99.1494% ( 8) 00:08:04.473 16232.763 - 16333.588: 99.1976% ( 8) 00:08:04.473 16333.588 - 16434.412: 99.2278% ( 5) 00:08:04.473 17241.009 - 17341.834: 99.2338% ( 1) 00:08:04.473 17341.834 - 17442.658: 99.2700% ( 6) 00:08:04.473 17442.658 - 17543.483: 99.3062% ( 6) 00:08:04.473 17543.483 - 17644.308: 99.3424% ( 6) 00:08:04.473 17644.308 - 17745.132: 99.3786% ( 6) 00:08:04.473 17745.132 - 17845.957: 99.4088% ( 5) 00:08:04.473 17845.957 - 17946.782: 99.4510% ( 7) 00:08:04.473 17946.782 - 18047.606: 99.4872% ( 6) 00:08:04.473 18047.606 - 18148.431: 99.5234% ( 6) 00:08:04.473 18148.431 - 18249.255: 99.5656% ( 7) 00:08:04.473 18249.255 - 18350.080: 99.5958% ( 5) 00:08:04.473 18350.080 - 18450.905: 99.6139% ( 3) 00:08:04.473 22181.415 - 22282.240: 99.6260% ( 2) 00:08:04.473 22282.240 - 22383.065: 99.6561% ( 5) 00:08:04.473 22383.065 - 22483.889: 99.6863% ( 5) 00:08:04.473 22483.889 - 22584.714: 99.7104% ( 4) 00:08:04.473 22584.714 - 22685.538: 99.7406% ( 5) 
00:08:04.473 22685.538 - 22786.363: 99.7708% ( 5) 00:08:04.473 22786.363 - 22887.188: 99.7949% ( 4) 00:08:04.473 22887.188 - 22988.012: 99.8250% ( 5) 00:08:04.473 22988.012 - 23088.837: 99.8552% ( 5) 00:08:04.473 23088.837 - 23189.662: 99.8914% ( 6) 00:08:04.473 23189.662 - 23290.486: 99.9095% ( 3) 00:08:04.473 23290.486 - 23391.311: 99.9397% ( 5) 00:08:04.473 23391.311 - 23492.135: 99.9698% ( 5) 00:08:04.473 23492.135 - 23592.960: 99.9940% ( 4) 00:08:04.473 23592.960 - 23693.785: 100.0000% ( 1) 00:08:04.473 00:08:04.473 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:04.473 ============================================================================== 00:08:04.473 Range in us Cumulative IO count 00:08:04.473 4285.046 - 4310.252: 0.0181% ( 3) 00:08:04.473 4310.252 - 4335.458: 0.0302% ( 2) 00:08:04.473 4335.458 - 4360.665: 0.0422% ( 2) 00:08:04.473 4360.665 - 4385.871: 0.0543% ( 2) 00:08:04.473 4385.871 - 4411.077: 0.0724% ( 3) 00:08:04.473 4411.077 - 4436.283: 0.0784% ( 1) 00:08:04.473 4436.283 - 4461.489: 0.0965% ( 3) 00:08:04.473 4461.489 - 4486.695: 0.1207% ( 4) 00:08:04.473 4486.695 - 4511.902: 0.1327% ( 2) 00:08:04.473 4511.902 - 4537.108: 0.1448% ( 2) 00:08:04.473 4537.108 - 4562.314: 0.1629% ( 3) 00:08:04.473 4562.314 - 4587.520: 0.1750% ( 2) 00:08:04.473 4587.520 - 4612.726: 0.1870% ( 2) 00:08:04.473 4612.726 - 4637.932: 0.2051% ( 3) 00:08:04.473 4637.932 - 4663.138: 0.2172% ( 2) 00:08:04.473 4663.138 - 4688.345: 0.2292% ( 2) 00:08:04.473 4688.345 - 4713.551: 0.2473% ( 3) 00:08:04.474 4713.551 - 4738.757: 0.2594% ( 2) 00:08:04.474 4738.757 - 4763.963: 0.2715% ( 2) 00:08:04.474 4763.963 - 4789.169: 0.2896% ( 3) 00:08:04.474 4789.169 - 4814.375: 0.3016% ( 2) 00:08:04.474 4814.375 - 4839.582: 0.3137% ( 2) 00:08:04.474 4839.582 - 4864.788: 0.3318% ( 3) 00:08:04.474 4864.788 - 4889.994: 0.3439% ( 2) 00:08:04.474 4889.994 - 4915.200: 0.3620% ( 3) 00:08:04.474 4915.200 - 4940.406: 0.3740% ( 2) 00:08:04.474 4940.406 - 4965.612: 0.3861% ( 2) 00:08:04.474 5620.972 - 5646.178: 0.3982% ( 2) 00:08:04.474 5646.178 - 5671.385: 0.4042% ( 1) 00:08:04.474 5671.385 - 5696.591: 0.4645% ( 10) 00:08:04.474 5696.591 - 5721.797: 0.6214% ( 26) 00:08:04.474 5721.797 - 5747.003: 0.8808% ( 43) 00:08:04.474 5747.003 - 5772.209: 1.2609% ( 63) 00:08:04.474 5772.209 - 5797.415: 1.8581% ( 99) 00:08:04.474 5797.415 - 5822.622: 2.6243% ( 127) 00:08:04.474 5822.622 - 5847.828: 3.6257% ( 166) 00:08:04.474 5847.828 - 5873.034: 4.6754% ( 174) 00:08:04.474 5873.034 - 5898.240: 5.8760% ( 199) 00:08:04.474 5898.240 - 5923.446: 7.2454% ( 227) 00:08:04.474 5923.446 - 5948.652: 8.8984% ( 274) 00:08:04.474 5948.652 - 5973.858: 10.5212% ( 269) 00:08:04.474 5973.858 - 5999.065: 12.0958% ( 261) 00:08:04.474 5999.065 - 6024.271: 13.6945% ( 265) 00:08:04.474 6024.271 - 6049.477: 15.2389% ( 256) 00:08:04.474 6049.477 - 6074.683: 16.8859% ( 273) 00:08:04.474 6074.683 - 6099.889: 18.5569% ( 277) 00:08:04.474 6099.889 - 6125.095: 20.2461% ( 280) 00:08:04.474 6125.095 - 6150.302: 22.0138% ( 293) 00:08:04.474 6150.302 - 6175.508: 23.8357% ( 302) 00:08:04.474 6175.508 - 6200.714: 25.5611% ( 286) 00:08:04.474 6200.714 - 6225.920: 27.3106% ( 290) 00:08:04.474 6225.920 - 6251.126: 29.1144% ( 299) 00:08:04.474 6251.126 - 6276.332: 30.8639% ( 290) 00:08:04.474 6276.332 - 6301.538: 32.7341% ( 310) 00:08:04.474 6301.538 - 6326.745: 34.6042% ( 310) 00:08:04.474 6326.745 - 6351.951: 36.4986% ( 314) 00:08:04.474 6351.951 - 6377.157: 38.2662% ( 293) 00:08:04.474 6377.157 - 6402.363: 40.1363% ( 310) 00:08:04.474 6402.363 - 
00:08:04.474  [per-bucket histogram data continues for this controller: buckets from 6427.569us upward; cumulative IO count reaches 100.0000% at the 23088.837 - 23189.662us bucket]
00:08:04.475  
00:08:04.475  Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:04.475  ==============================================================================
00:08:04.475         Range in us     Cumulative    IO count
00:08:04.475  [per-bucket data elided: buckets from 3755.717us upward; cumulative IO count reaches 100.0000% at the 22786.363 - 22887.188us bucket]
00:08:04.476  
00:08:04.476  Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:04.476  ==============================================================================
00:08:04.476         Range in us     Cumulative    IO count
00:08:04.477  [per-bucket data elided: buckets from 3554.068us upward; cumulative IO count reaches 100.0000% at the 22383.065 - 22483.889us bucket]
00:08:04.478  
00:08:04.478  Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:04.478  ==============================================================================
00:08:04.478         Range in us     Cumulative    IO count
00:08:04.479  [per-bucket data elided: buckets from 3402.831us upward; cumulative IO count reaches 100.0000% at the 21979.766 - 22080.591us bucket]
00:08:04.479  
00:08:04.479  Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:04.480  ==============================================================================
00:08:04.480         Range in us     Cumulative    IO count
00:08:04.481  [per-bucket data elided: buckets from 3213.785us upward; cumulative IO count reaches 100.0000% at the 21576.468 - 21677.292us bucket]
00:08:04.481  
00:08:04.481  13:56:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
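The spdk_nvme_perf invocation above drives the workload that produces all of the latency data in this run. For reference, a minimal annotated sketch of that invocation; the binary path is this CI checkout's, and the flag glosses follow spdk_nvme_perf's usage text, so they are worth re-checking against the exact SPDK revision built above:

    #!/usr/bin/env bash
    # Sketch: re-run the perf workload from this log by hand.
    # The path below assumes the CI layout; point it at your own build.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    # -q 128   : queue depth, 128 outstanding I/Os per namespace
    # -w write : I/O pattern, 100% writes
    # -o 12288 : I/O size in bytes (12 KiB per request)
    # -t 1     : run time in seconds
    # -LL      : -L enables software latency tracking; repeating it also
    #            prints the per-bucket latency histograms seen above
    # -i 0     : shared-memory group ID, so the process can coexist with
    #            other SPDK applications on the same host
    "$PERF" -q 128 -w write -o 12288 -t 1 -LL -i 0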
00:08:05.430  Initializing NVMe Controllers
00:08:05.430  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:05.430  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:05.430  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:05.430  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:05.430  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:05.430  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:05.430  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:05.430  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:05.430  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:05.430  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:05.430  Initialization complete. Launching workers.
00:08:05.430  ========================================================
00:08:05.430                                                                    Latency(us)
00:08:05.430  Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:05.430  PCIE (0000:00:10.0) NSID 1 from core 0 :   18038.15     211.38    7097.93    4581.01   21444.85
00:08:05.430  PCIE (0000:00:11.0) NSID 1 from core 0 :   18038.15     211.38    7092.44    4343.21   20755.85
00:08:05.430  PCIE (0000:00:13.0) NSID 1 from core 0 :   18038.15     211.38    7086.80    3969.81   20108.41
00:08:05.430  PCIE (0000:00:12.0) NSID 1 from core 0 :   18038.15     211.38    7080.94    3760.67   19325.96
00:08:05.430  PCIE (0000:00:12.0) NSID 2 from core 0 :   18038.15     211.38    7074.98    3518.62   18868.98
00:08:05.430  PCIE (0000:00:12.0) NSID 3 from core 0 :   18038.15     211.38    7069.17    3282.37   18532.58
00:08:05.430  ========================================================
00:08:05.430  Total                                  :  108228.91    1268.31    7083.71    3282.37   21444.85
00:08:05.430  
00:08:05.430  Summary latency data, per device from core 0 (all values in us):
00:08:05.430  =================================================================================
00:08:05.430  Percentile    10.0 NSID1    11.0 NSID1    13.0 NSID1    12.0 NSID1    12.0 NSID2    12.0 NSID3
00:08:05.430   1.00000%       6099.889      6276.332      6125.095      6150.302      6150.302      6150.302
00:08:05.430  10.00000%       6427.569      6503.188      6503.188      6503.188      6503.188      6503.188
00:08:05.430  25.00000%       6604.012      6654.425      6654.425      6654.425      6654.425      6654.425
00:08:05.430  50.00000%       6856.074      6856.074      6856.074      6805.662      6805.662      6856.074
00:08:05.430  75.00000%       7259.372      7158.548      7158.548      7208.960      7158.548      7158.548
00:08:05.430  90.00000%       7965.145      7864.320      7864.320      7864.320      7864.320      7914.732
00:08:05.430  95.00000%       8519.680      8469.268      8469.268      8418.855      8418.855      8418.855
00:08:05.430  98.00000%       9679.163      9729.575      9729.575      9830.400      9628.751      9628.751
00:08:05.430  99.00000%      12250.191     13107.200     13208.025     13510.498     12804.726     12149.366
00:08:05.430  99.50000%      15627.815     15829.465     16333.588     15930.289     15325.342     14821.218
00:08:05.430  99.90000%      21072.345     20467.397     19660.800     19055.852     18652.554     18047.606
00:08:05.430  99.99000%      21374.818     20769.871     20164.923     19358.326     18854.203     18551.729
00:08:05.431  99.99900%      21475.643     20769.871     20164.923     19358.326     18955.028     18551.729
00:08:05.431  99.99990%      21475.643     20769.871     20164.923     19358.326     18955.028     18551.729
00:08:05.431  99.99999%      21475.643     20769.871     20164.923     19358.326     18955.028     18551.729
16333.588us 00:08:05.430 99.90000% : 19660.800us 00:08:05.430 99.99000% : 20164.923us 00:08:05.430 99.99900% : 20164.923us 00:08:05.430 99.99990% : 20164.923us 00:08:05.430 99.99999% : 20164.923us 00:08:05.430 00:08:05.430 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:05.430 ================================================================================= 00:08:05.430 1.00000% : 6150.302us 00:08:05.430 10.00000% : 6503.188us 00:08:05.430 25.00000% : 6654.425us 00:08:05.430 50.00000% : 6805.662us 00:08:05.430 75.00000% : 7208.960us 00:08:05.430 90.00000% : 7864.320us 00:08:05.430 95.00000% : 8418.855us 00:08:05.430 98.00000% : 9830.400us 00:08:05.430 99.00000% : 13510.498us 00:08:05.430 99.50000% : 15930.289us 00:08:05.430 99.90000% : 19055.852us 00:08:05.430 99.99000% : 19358.326us 00:08:05.430 99.99900% : 19358.326us 00:08:05.430 99.99990% : 19358.326us 00:08:05.430 99.99999% : 19358.326us 00:08:05.430 00:08:05.430 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:05.430 ================================================================================= 00:08:05.430 1.00000% : 6150.302us 00:08:05.430 10.00000% : 6503.188us 00:08:05.430 25.00000% : 6654.425us 00:08:05.430 50.00000% : 6805.662us 00:08:05.430 75.00000% : 7158.548us 00:08:05.430 90.00000% : 7864.320us 00:08:05.430 95.00000% : 8418.855us 00:08:05.430 98.00000% : 9628.751us 00:08:05.430 99.00000% : 12804.726us 00:08:05.430 99.50000% : 15325.342us 00:08:05.430 99.90000% : 18652.554us 00:08:05.430 99.99000% : 18854.203us 00:08:05.430 99.99900% : 18955.028us 00:08:05.430 99.99990% : 18955.028us 00:08:05.430 99.99999% : 18955.028us 00:08:05.430 00:08:05.430 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.431 ================================================================================= 00:08:05.431 1.00000% : 6150.302us 00:08:05.431 10.00000% : 6503.188us 00:08:05.431 25.00000% : 6654.425us 00:08:05.431 50.00000% : 6856.074us 00:08:05.431 75.00000% : 7158.548us 00:08:05.431 90.00000% : 7914.732us 00:08:05.431 95.00000% : 8418.855us 00:08:05.431 98.00000% : 9628.751us 00:08:05.431 99.00000% : 12149.366us 00:08:05.431 99.50000% : 14821.218us 00:08:05.431 99.90000% : 18047.606us 00:08:05.431 99.99000% : 18551.729us 00:08:05.431 99.99900% : 18551.729us 00:08:05.431 99.99990% : 18551.729us 00:08:05.431 99.99999% : 18551.729us 00:08:05.431 00:08:05.431 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:05.431 ============================================================================== 00:08:05.431 Range in us Cumulative IO count 00:08:05.431 4562.314 - 4587.520: 0.0055% ( 1) 00:08:05.431 4587.520 - 4612.726: 0.0222% ( 3) 00:08:05.431 4612.726 - 4637.932: 0.0554% ( 6) 00:08:05.431 4637.932 - 4663.138: 0.0665% ( 2) 00:08:05.431 4663.138 - 4688.345: 0.0776% ( 2) 00:08:05.431 4688.345 - 4713.551: 0.0997% ( 4) 00:08:05.431 4713.551 - 4738.757: 0.1274% ( 5) 00:08:05.431 4738.757 - 4763.963: 0.1662% ( 7) 00:08:05.431 4763.963 - 4789.169: 0.2105% ( 8) 00:08:05.431 4789.169 - 4814.375: 0.2549% ( 8) 00:08:05.431 4814.375 - 4839.582: 0.2604% ( 1) 00:08:05.431 4839.582 - 4864.788: 0.2715% ( 2) 00:08:05.431 4864.788 - 4889.994: 0.2770% ( 1) 00:08:05.431 4889.994 - 4915.200: 0.2937% ( 3) 00:08:05.431 4915.200 - 4940.406: 0.3047% ( 2) 00:08:05.431 4940.406 - 4965.612: 0.3214% ( 3) 00:08:05.431 4965.612 - 4990.818: 0.3269% ( 1) 00:08:05.431 4990.818 - 5016.025: 0.3380% ( 2) 00:08:05.431 5016.025 - 5041.231: 0.3546% ( 3) 00:08:05.431 5797.415 - 5822.622: 0.3712% ( 
3) 00:08:05.431 5822.622 - 5847.828: 0.3823% ( 2) 00:08:05.431 5847.828 - 5873.034: 0.4045% ( 4) 00:08:05.431 5873.034 - 5898.240: 0.4433% ( 7) 00:08:05.431 5898.240 - 5923.446: 0.4765% ( 6) 00:08:05.431 5923.446 - 5948.652: 0.5153% ( 7) 00:08:05.431 5948.652 - 5973.858: 0.5485% ( 6) 00:08:05.431 5973.858 - 5999.065: 0.5929% ( 8) 00:08:05.431 5999.065 - 6024.271: 0.6538% ( 11) 00:08:05.431 6024.271 - 6049.477: 0.7258% ( 13) 00:08:05.431 6049.477 - 6074.683: 0.8367% ( 20) 00:08:05.431 6074.683 - 6099.889: 1.0084% ( 31) 00:08:05.431 6099.889 - 6125.095: 1.2245% ( 39) 00:08:05.431 6125.095 - 6150.302: 1.4461% ( 40) 00:08:05.431 6150.302 - 6175.508: 1.6844% ( 43) 00:08:05.431 6175.508 - 6200.714: 1.9393% ( 46) 00:08:05.431 6200.714 - 6225.920: 2.2219% ( 51) 00:08:05.431 6225.920 - 6251.126: 2.7371% ( 93) 00:08:05.431 6251.126 - 6276.332: 3.3134% ( 104) 00:08:05.431 6276.332 - 6301.538: 4.0891% ( 140) 00:08:05.431 6301.538 - 6326.745: 5.1308% ( 188) 00:08:05.431 6326.745 - 6351.951: 6.1724% ( 188) 00:08:05.431 6351.951 - 6377.157: 7.3859% ( 219) 00:08:05.431 6377.157 - 6402.363: 9.1257% ( 314) 00:08:05.431 6402.363 - 6427.569: 11.2533% ( 384) 00:08:05.431 6427.569 - 6452.775: 13.5805% ( 420) 00:08:05.431 6452.775 - 6503.188: 19.0049% ( 979) 00:08:05.431 6503.188 - 6553.600: 24.5844% ( 1007) 00:08:05.431 6553.600 - 6604.012: 30.0255% ( 982) 00:08:05.431 6604.012 - 6654.425: 34.9346% ( 886) 00:08:05.431 6654.425 - 6704.837: 39.0570% ( 744) 00:08:05.431 6704.837 - 6755.249: 43.0685% ( 724) 00:08:05.431 6755.249 - 6805.662: 47.1576% ( 738) 00:08:05.431 6805.662 - 6856.074: 51.0860% ( 709) 00:08:05.431 6856.074 - 6906.486: 54.8925% ( 687) 00:08:05.431 6906.486 - 6956.898: 58.5051% ( 652) 00:08:05.431 6956.898 - 7007.311: 62.1620% ( 660) 00:08:05.431 7007.311 - 7057.723: 65.5419% ( 610) 00:08:05.431 7057.723 - 7108.135: 68.7666% ( 582) 00:08:05.431 7108.135 - 7158.548: 71.9027% ( 566) 00:08:05.431 7158.548 - 7208.960: 74.3905% ( 449) 00:08:05.431 7208.960 - 7259.372: 76.3797% ( 359) 00:08:05.431 7259.372 - 7309.785: 78.2857% ( 344) 00:08:05.431 7309.785 - 7360.197: 79.6376% ( 244) 00:08:05.431 7360.197 - 7410.609: 81.0228% ( 250) 00:08:05.431 7410.609 - 7461.022: 82.5576% ( 277) 00:08:05.431 7461.022 - 7511.434: 83.6824% ( 203) 00:08:05.431 7511.434 - 7561.846: 84.6465% ( 174) 00:08:05.431 7561.846 - 7612.258: 85.5607% ( 165) 00:08:05.431 7612.258 - 7662.671: 86.3364% ( 140) 00:08:05.431 7662.671 - 7713.083: 87.0900% ( 136) 00:08:05.431 7713.083 - 7763.495: 87.7992% ( 128) 00:08:05.431 7763.495 - 7813.908: 88.4641% ( 120) 00:08:05.431 7813.908 - 7864.320: 89.0348% ( 103) 00:08:05.431 7864.320 - 7914.732: 89.7662% ( 132) 00:08:05.431 7914.732 - 7965.145: 90.4311% ( 120) 00:08:05.431 7965.145 - 8015.557: 91.0350% ( 109) 00:08:05.431 8015.557 - 8065.969: 91.7664% ( 132) 00:08:05.431 8065.969 - 8116.382: 92.3149% ( 99) 00:08:05.431 8116.382 - 8166.794: 92.7859% ( 85) 00:08:05.431 8166.794 - 8217.206: 93.2680% ( 87) 00:08:05.431 8217.206 - 8267.618: 93.5949% ( 59) 00:08:05.431 8267.618 - 8318.031: 93.8774% ( 51) 00:08:05.431 8318.031 - 8368.443: 94.1656% ( 52) 00:08:05.431 8368.443 - 8418.855: 94.4758% ( 56) 00:08:05.431 8418.855 - 8469.268: 94.7584% ( 51) 00:08:05.431 8469.268 - 8519.680: 95.0742% ( 57) 00:08:05.431 8519.680 - 8570.092: 95.4399% ( 66) 00:08:05.431 8570.092 - 8620.505: 95.6449% ( 37) 00:08:05.431 8620.505 - 8670.917: 95.8278% ( 33) 00:08:05.431 8670.917 - 8721.329: 96.0550% ( 41) 00:08:05.431 8721.329 - 8771.742: 96.2877% ( 42) 00:08:05.431 8771.742 - 8822.154: 96.4982% ( 38) 00:08:05.431 
8822.154 - 8872.566: 96.6755% ( 32) 00:08:05.431 8872.566 - 8922.978: 96.8141% ( 25) 00:08:05.431 8922.978 - 8973.391: 96.9470% ( 24) 00:08:05.431 8973.391 - 9023.803: 97.0412% ( 17) 00:08:05.431 9023.803 - 9074.215: 97.2074% ( 30) 00:08:05.431 9074.215 - 9124.628: 97.2961% ( 16) 00:08:05.431 9124.628 - 9175.040: 97.3681% ( 13) 00:08:05.431 9175.040 - 9225.452: 97.4291% ( 11) 00:08:05.431 9225.452 - 9275.865: 97.5344% ( 19) 00:08:05.431 9275.865 - 9326.277: 97.6452% ( 20) 00:08:05.431 9326.277 - 9376.689: 97.7117% ( 12) 00:08:05.431 9376.689 - 9427.102: 97.7726% ( 11) 00:08:05.431 9427.102 - 9477.514: 97.8280% ( 10) 00:08:05.431 9477.514 - 9527.926: 97.8723% ( 8) 00:08:05.431 9527.926 - 9578.338: 97.9167% ( 8) 00:08:05.431 9578.338 - 9628.751: 97.9832% ( 12) 00:08:05.431 9628.751 - 9679.163: 98.0441% ( 11) 00:08:05.431 9679.163 - 9729.575: 98.0940% ( 9) 00:08:05.431 9729.575 - 9779.988: 98.1494% ( 10) 00:08:05.431 9779.988 - 9830.400: 98.1992% ( 9) 00:08:05.431 9830.400 - 9880.812: 98.2270% ( 5) 00:08:05.431 9880.812 - 9931.225: 98.2657% ( 7) 00:08:05.431 9931.225 - 9981.637: 98.2990% ( 6) 00:08:05.431 9981.637 - 10032.049: 98.3211% ( 4) 00:08:05.431 10032.049 - 10082.462: 98.3378% ( 3) 00:08:05.431 10082.462 - 10132.874: 98.3710% ( 6) 00:08:05.431 10132.874 - 10183.286: 98.4264% ( 10) 00:08:05.431 10183.286 - 10233.698: 98.4430% ( 3) 00:08:05.431 10233.698 - 10284.111: 98.4707% ( 5) 00:08:05.431 10284.111 - 10334.523: 98.4818% ( 2) 00:08:05.431 10334.523 - 10384.935: 98.4874% ( 1) 00:08:05.431 10384.935 - 10435.348: 98.4929% ( 1) 00:08:05.431 10435.348 - 10485.760: 98.4984% ( 1) 00:08:05.431 10485.760 - 10536.172: 98.5040% ( 1) 00:08:05.431 10536.172 - 10586.585: 98.5095% ( 1) 00:08:05.431 10586.585 - 10636.997: 98.5206% ( 2) 00:08:05.431 10636.997 - 10687.409: 98.5428% ( 4) 00:08:05.431 10687.409 - 10737.822: 98.5539% ( 2) 00:08:05.431 10737.822 - 10788.234: 98.5760% ( 4) 00:08:05.431 10788.234 - 10838.646: 98.5816% ( 1) 00:08:05.431 10889.058 - 10939.471: 98.5871% ( 1) 00:08:05.431 10939.471 - 10989.883: 98.5926% ( 1) 00:08:05.431 10989.883 - 11040.295: 98.6037% ( 2) 00:08:05.431 11040.295 - 11090.708: 98.6259% ( 4) 00:08:05.431 11090.708 - 11141.120: 98.6425% ( 3) 00:08:05.431 11141.120 - 11191.532: 98.6758% ( 6) 00:08:05.431 11191.532 - 11241.945: 98.7145% ( 7) 00:08:05.431 11241.945 - 11292.357: 98.7533% ( 7) 00:08:05.431 11292.357 - 11342.769: 98.7810% ( 5) 00:08:05.431 11393.182 - 11443.594: 98.7921% ( 2) 00:08:05.431 11443.594 - 11494.006: 98.7977% ( 1) 00:08:05.431 11494.006 - 11544.418: 98.8032% ( 1) 00:08:05.431 11544.418 - 11594.831: 98.8087% ( 1) 00:08:05.431 11594.831 - 11645.243: 98.8143% ( 1) 00:08:05.431 11645.243 - 11695.655: 98.8198% ( 1) 00:08:05.431 11695.655 - 11746.068: 98.8309% ( 2) 00:08:05.432 11796.480 - 11846.892: 98.8420% ( 2) 00:08:05.432 11846.892 - 11897.305: 98.8531% ( 2) 00:08:05.432 11897.305 - 11947.717: 98.8697% ( 3) 00:08:05.432 11947.717 - 11998.129: 98.9029% ( 6) 00:08:05.432 11998.129 - 12048.542: 98.9362% ( 6) 00:08:05.432 12048.542 - 12098.954: 98.9694% ( 6) 00:08:05.432 12098.954 - 12149.366: 98.9916% ( 4) 00:08:05.432 12149.366 - 12199.778: 98.9971% ( 1) 00:08:05.432 12199.778 - 12250.191: 99.0193% ( 4) 00:08:05.432 12250.191 - 12300.603: 99.0414% ( 4) 00:08:05.432 12300.603 - 12351.015: 99.0636% ( 4) 00:08:05.432 12351.015 - 12401.428: 99.0802% ( 3) 00:08:05.432 12401.428 - 12451.840: 99.0969% ( 3) 00:08:05.432 12451.840 - 12502.252: 99.1024% ( 1) 00:08:05.432 12502.252 - 12552.665: 99.1079% ( 1) 00:08:05.432 12552.665 - 12603.077: 99.1356% ( 
5) 00:08:05.432 12603.077 - 12653.489: 99.1523% ( 3) 00:08:05.432 12653.489 - 12703.902: 99.1689% ( 3) 00:08:05.432 12703.902 - 12754.314: 99.1910% ( 4) 00:08:05.432 12754.314 - 12804.726: 99.2132% ( 4) 00:08:05.432 12804.726 - 12855.138: 99.2298% ( 3) 00:08:05.432 12855.138 - 12905.551: 99.2409% ( 2) 00:08:05.432 12905.551 - 13006.375: 99.2465% ( 1) 00:08:05.432 13006.375 - 13107.200: 99.2686% ( 4) 00:08:05.432 13107.200 - 13208.025: 99.2797% ( 2) 00:08:05.432 13308.849 - 13409.674: 99.2908% ( 2) 00:08:05.432 15123.692 - 15224.517: 99.4127% ( 22) 00:08:05.432 15224.517 - 15325.342: 99.4293% ( 3) 00:08:05.432 15325.342 - 15426.166: 99.4736% ( 8) 00:08:05.432 15426.166 - 15526.991: 99.4902% ( 3) 00:08:05.432 15526.991 - 15627.815: 99.5180% ( 5) 00:08:05.432 15627.815 - 15728.640: 99.5290% ( 2) 00:08:05.432 15728.640 - 15829.465: 99.5401% ( 2) 00:08:05.432 15829.465 - 15930.289: 99.5734% ( 6) 00:08:05.432 15930.289 - 16031.114: 99.5955% ( 4) 00:08:05.432 16031.114 - 16131.938: 99.6232% ( 5) 00:08:05.432 16131.938 - 16232.763: 99.6454% ( 4) 00:08:05.432 20164.923 - 20265.748: 99.6676% ( 4) 00:08:05.432 20265.748 - 20366.572: 99.6953% ( 5) 00:08:05.432 20366.572 - 20467.397: 99.7174% ( 4) 00:08:05.432 20467.397 - 20568.222: 99.7451% ( 5) 00:08:05.432 20568.222 - 20669.046: 99.7728% ( 5) 00:08:05.432 20669.046 - 20769.871: 99.8005% ( 5) 00:08:05.432 20769.871 - 20870.695: 99.8338% ( 6) 00:08:05.432 20870.695 - 20971.520: 99.8670% ( 6) 00:08:05.432 20971.520 - 21072.345: 99.9003% ( 6) 00:08:05.432 21072.345 - 21173.169: 99.9335% ( 6) 00:08:05.432 21173.169 - 21273.994: 99.9668% ( 6) 00:08:05.432 21273.994 - 21374.818: 99.9945% ( 5) 00:08:05.432 21374.818 - 21475.643: 100.0000% ( 1) 00:08:05.432 00:08:05.432 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:05.432 ============================================================================== 00:08:05.432 Range in us Cumulative IO count 00:08:05.432 4335.458 - 4360.665: 0.0111% ( 2) 00:08:05.432 4360.665 - 4385.871: 0.0222% ( 2) 00:08:05.432 4385.871 - 4411.077: 0.0332% ( 2) 00:08:05.432 4411.077 - 4436.283: 0.0720% ( 7) 00:08:05.432 4436.283 - 4461.489: 0.1219% ( 9) 00:08:05.432 4461.489 - 4486.695: 0.1828% ( 11) 00:08:05.432 4486.695 - 4511.902: 0.2327% ( 9) 00:08:05.432 4511.902 - 4537.108: 0.2881% ( 10) 00:08:05.432 4537.108 - 4562.314: 0.3269% ( 7) 00:08:05.432 4562.314 - 4587.520: 0.3380% ( 2) 00:08:05.432 4587.520 - 4612.726: 0.3491% ( 2) 00:08:05.432 4612.726 - 4637.932: 0.3546% ( 1) 00:08:05.432 5923.446 - 5948.652: 0.3602% ( 1) 00:08:05.432 5948.652 - 5973.858: 0.3657% ( 1) 00:08:05.432 5999.065 - 6024.271: 0.3768% ( 2) 00:08:05.432 6024.271 - 6049.477: 0.3823% ( 1) 00:08:05.432 6049.477 - 6074.683: 0.3879% ( 1) 00:08:05.432 6074.683 - 6099.889: 0.3934% ( 1) 00:08:05.432 6099.889 - 6125.095: 0.3989% ( 1) 00:08:05.432 6125.095 - 6150.302: 0.4377% ( 7) 00:08:05.432 6150.302 - 6175.508: 0.4820% ( 8) 00:08:05.432 6175.508 - 6200.714: 0.5762% ( 17) 00:08:05.432 6200.714 - 6225.920: 0.7148% ( 25) 00:08:05.432 6225.920 - 6251.126: 0.9586% ( 44) 00:08:05.432 6251.126 - 6276.332: 1.4295% ( 85) 00:08:05.432 6276.332 - 6301.538: 1.9337% ( 91) 00:08:05.432 6301.538 - 6326.745: 2.3604% ( 77) 00:08:05.432 6326.745 - 6351.951: 3.0197% ( 119) 00:08:05.432 6351.951 - 6377.157: 3.9007% ( 159) 00:08:05.432 6377.157 - 6402.363: 5.0643% ( 210) 00:08:05.432 6402.363 - 6427.569: 6.2999% ( 223) 00:08:05.432 6427.569 - 6452.775: 7.3748% ( 194) 00:08:05.432 6452.775 - 6503.188: 10.9209% ( 640) 00:08:05.432 6503.188 - 6553.600: 16.0572% ( 927) 
00:08:05.432 6553.600 - 6604.012: 21.6977% ( 1018) 00:08:05.432 6604.012 - 6654.425: 29.0780% ( 1332) 00:08:05.432 6654.425 - 6704.837: 36.1425% ( 1275) 00:08:05.432 6704.837 - 6755.249: 44.2154% ( 1457) 00:08:05.432 6755.249 - 6805.662: 49.7562% ( 1000) 00:08:05.432 6805.662 - 6856.074: 55.0089% ( 948) 00:08:05.432 6856.074 - 6906.486: 59.5578% ( 821) 00:08:05.432 6906.486 - 6956.898: 63.7522% ( 757) 00:08:05.432 6956.898 - 7007.311: 67.4368% ( 665) 00:08:05.432 7007.311 - 7057.723: 70.6782% ( 585) 00:08:05.432 7057.723 - 7108.135: 73.2768% ( 469) 00:08:05.432 7108.135 - 7158.548: 75.4765% ( 397) 00:08:05.432 7158.548 - 7208.960: 77.3881% ( 345) 00:08:05.432 7208.960 - 7259.372: 78.9284% ( 278) 00:08:05.432 7259.372 - 7309.785: 80.4521% ( 275) 00:08:05.432 7309.785 - 7360.197: 81.6822% ( 222) 00:08:05.432 7360.197 - 7410.609: 82.5742% ( 161) 00:08:05.432 7410.609 - 7461.022: 83.3832% ( 146) 00:08:05.432 7461.022 - 7511.434: 84.1090% ( 131) 00:08:05.432 7511.434 - 7561.846: 84.9623% ( 154) 00:08:05.432 7561.846 - 7612.258: 85.8101% ( 153) 00:08:05.432 7612.258 - 7662.671: 87.0290% ( 220) 00:08:05.432 7662.671 - 7713.083: 88.0208% ( 179) 00:08:05.432 7713.083 - 7763.495: 88.8797% ( 155) 00:08:05.432 7763.495 - 7813.908: 89.6443% ( 138) 00:08:05.432 7813.908 - 7864.320: 90.2759% ( 114) 00:08:05.432 7864.320 - 7914.732: 90.9574% ( 123) 00:08:05.432 7914.732 - 7965.145: 91.4672% ( 92) 00:08:05.432 7965.145 - 8015.557: 91.9215% ( 82) 00:08:05.432 8015.557 - 8065.969: 92.2152% ( 53) 00:08:05.432 8065.969 - 8116.382: 92.5477% ( 60) 00:08:05.432 8116.382 - 8166.794: 92.8690% ( 58) 00:08:05.432 8166.794 - 8217.206: 93.4231% ( 100) 00:08:05.432 8217.206 - 8267.618: 93.8553% ( 78) 00:08:05.432 8267.618 - 8318.031: 94.3429% ( 88) 00:08:05.432 8318.031 - 8368.443: 94.5867% ( 44) 00:08:05.432 8368.443 - 8418.855: 94.8969% ( 56) 00:08:05.432 8418.855 - 8469.268: 95.0355% ( 25) 00:08:05.432 8469.268 - 8519.680: 95.1684% ( 24) 00:08:05.432 8519.680 - 8570.092: 95.3014% ( 24) 00:08:05.432 8570.092 - 8620.505: 95.4510% ( 27) 00:08:05.432 8620.505 - 8670.917: 95.5895% ( 25) 00:08:05.432 8670.917 - 8721.329: 95.7890% ( 36) 00:08:05.432 8721.329 - 8771.742: 96.1381% ( 63) 00:08:05.432 8771.742 - 8822.154: 96.5259% ( 70) 00:08:05.432 8822.154 - 8872.566: 96.7309% ( 37) 00:08:05.432 8872.566 - 8922.978: 96.9415% ( 38) 00:08:05.432 8922.978 - 8973.391: 97.0634% ( 22) 00:08:05.432 8973.391 - 9023.803: 97.1908% ( 23) 00:08:05.432 9023.803 - 9074.215: 97.2684% ( 14) 00:08:05.432 9074.215 - 9124.628: 97.3681% ( 18) 00:08:05.432 9124.628 - 9175.040: 97.4679% ( 18) 00:08:05.432 9175.040 - 9225.452: 97.5621% ( 17) 00:08:05.432 9225.452 - 9275.865: 97.6507% ( 16) 00:08:05.432 9275.865 - 9326.277: 97.7394% ( 16) 00:08:05.432 9326.277 - 9376.689: 97.8003% ( 11) 00:08:05.432 9376.689 - 9427.102: 97.8391% ( 7) 00:08:05.432 9427.102 - 9477.514: 97.8723% ( 6) 00:08:05.432 9477.514 - 9527.926: 97.9000% ( 5) 00:08:05.432 9527.926 - 9578.338: 97.9333% ( 6) 00:08:05.432 9578.338 - 9628.751: 97.9610% ( 5) 00:08:05.432 9628.751 - 9679.163: 97.9942% ( 6) 00:08:05.432 9679.163 - 9729.575: 98.0164% ( 4) 00:08:05.432 9729.575 - 9779.988: 98.1161% ( 18) 00:08:05.432 9779.988 - 9830.400: 98.1494% ( 6) 00:08:05.432 9830.400 - 9880.812: 98.1826% ( 6) 00:08:05.432 9880.812 - 9931.225: 98.2103% ( 5) 00:08:05.432 9931.225 - 9981.637: 98.2270% ( 3) 00:08:05.432 10082.462 - 10132.874: 98.2547% ( 5) 00:08:05.432 10132.874 - 10183.286: 98.2824% ( 5) 00:08:05.432 10183.286 - 10233.698: 98.3101% ( 5) 00:08:05.432 10233.698 - 10284.111: 98.5151% ( 37) 
00:08:05.432 10284.111 - 10334.523: 98.5372% ( 4) 00:08:05.432 10334.523 - 10384.935: 98.5483% ( 2) 00:08:05.432 10384.935 - 10435.348: 98.5926% ( 8) 00:08:05.432 10435.348 - 10485.760: 98.6480% ( 10) 00:08:05.432 10485.760 - 10536.172: 98.7367% ( 16) 00:08:05.432 10536.172 - 10586.585: 98.7977% ( 11) 00:08:05.432 10586.585 - 10636.997: 98.8087% ( 2) 00:08:05.432 10636.997 - 10687.409: 98.8198% ( 2) 00:08:05.432 10687.409 - 10737.822: 98.8254% ( 1) 00:08:05.432 10737.822 - 10788.234: 98.8364% ( 2) 00:08:05.432 10788.234 - 10838.646: 98.8420% ( 1) 00:08:05.432 10838.646 - 10889.058: 98.8531% ( 2) 00:08:05.432 10889.058 - 10939.471: 98.8586% ( 1) 00:08:05.432 10939.471 - 10989.883: 98.8697% ( 2) 00:08:05.432 10989.883 - 11040.295: 98.8808% ( 2) 00:08:05.432 11040.295 - 11090.708: 98.8863% ( 1) 00:08:05.432 11090.708 - 11141.120: 98.8974% ( 2) 00:08:05.432 11141.120 - 11191.532: 98.9085% ( 2) 00:08:05.432 11191.532 - 11241.945: 98.9140% ( 1) 00:08:05.432 11241.945 - 11292.357: 98.9251% ( 2) 00:08:05.432 11292.357 - 11342.769: 98.9306% ( 1) 00:08:05.432 11342.769 - 11393.182: 98.9362% ( 1) 00:08:05.432 12603.077 - 12653.489: 98.9417% ( 1) 00:08:05.432 12804.726 - 12855.138: 98.9583% ( 3) 00:08:05.433 12855.138 - 12905.551: 98.9639% ( 1) 00:08:05.433 12905.551 - 13006.375: 98.9916% ( 5) 00:08:05.433 13006.375 - 13107.200: 99.0193% ( 5) 00:08:05.433 13107.200 - 13208.025: 99.0414% ( 4) 00:08:05.433 13208.025 - 13308.849: 99.0636% ( 4) 00:08:05.433 13308.849 - 13409.674: 99.1633% ( 18) 00:08:05.433 13409.674 - 13510.498: 99.2631% ( 18) 00:08:05.433 13510.498 - 13611.323: 99.2908% ( 5) 00:08:05.433 15022.868 - 15123.692: 99.3129% ( 4) 00:08:05.433 15123.692 - 15224.517: 99.3406% ( 5) 00:08:05.433 15224.517 - 15325.342: 99.3684% ( 5) 00:08:05.433 15325.342 - 15426.166: 99.4016% ( 6) 00:08:05.433 15426.166 - 15526.991: 99.4293% ( 5) 00:08:05.433 15526.991 - 15627.815: 99.4625% ( 6) 00:08:05.433 15627.815 - 15728.640: 99.4902% ( 5) 00:08:05.433 15728.640 - 15829.465: 99.5235% ( 6) 00:08:05.433 15829.465 - 15930.289: 99.5512% ( 5) 00:08:05.433 15930.289 - 16031.114: 99.5844% ( 6) 00:08:05.433 16031.114 - 16131.938: 99.6177% ( 6) 00:08:05.433 16131.938 - 16232.763: 99.6454% ( 5) 00:08:05.433 19660.800 - 19761.625: 99.6676% ( 4) 00:08:05.433 19761.625 - 19862.449: 99.7008% ( 6) 00:08:05.433 19862.449 - 19963.274: 99.7340% ( 6) 00:08:05.433 19963.274 - 20064.098: 99.7673% ( 6) 00:08:05.433 20064.098 - 20164.923: 99.8005% ( 6) 00:08:05.433 20164.923 - 20265.748: 99.8338% ( 6) 00:08:05.433 20265.748 - 20366.572: 99.8670% ( 6) 00:08:05.433 20366.572 - 20467.397: 99.9003% ( 6) 00:08:05.433 20467.397 - 20568.222: 99.9335% ( 6) 00:08:05.433 20568.222 - 20669.046: 99.9668% ( 6) 00:08:05.433 20669.046 - 20769.871: 100.0000% ( 6) 00:08:05.433 00:08:05.433 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:05.433 ============================================================================== 00:08:05.433 Range in us Cumulative IO count 00:08:05.433 3957.366 - 3982.572: 0.0055% ( 1) 00:08:05.433 4007.778 - 4032.985: 0.0166% ( 2) 00:08:05.433 4032.985 - 4058.191: 0.0277% ( 2) 00:08:05.433 4058.191 - 4083.397: 0.0388% ( 2) 00:08:05.433 4083.397 - 4108.603: 0.0499% ( 2) 00:08:05.433 4108.603 - 4133.809: 0.0942% ( 8) 00:08:05.433 4133.809 - 4159.015: 0.1274% ( 6) 00:08:05.433 4159.015 - 4184.222: 0.1662% ( 7) 00:08:05.433 4184.222 - 4209.428: 0.2161% ( 9) 00:08:05.433 4209.428 - 4234.634: 0.2660% ( 9) 00:08:05.433 4234.634 - 4259.840: 0.2881% ( 4) 00:08:05.433 4259.840 - 4285.046: 0.2992% ( 2) 
00:08:05.433 4285.046 - 4310.252: 0.3103% ( 2) 00:08:05.433 4310.252 - 4335.458: 0.3214% ( 2) 00:08:05.433 4335.458 - 4360.665: 0.3324% ( 2) 00:08:05.433 4360.665 - 4385.871: 0.3380% ( 1) 00:08:05.433 4385.871 - 4411.077: 0.3491% ( 2) 00:08:05.433 4411.077 - 4436.283: 0.3546% ( 1) 00:08:05.433 5520.148 - 5545.354: 0.3602% ( 1) 00:08:05.433 5595.766 - 5620.972: 0.3712% ( 2) 00:08:05.433 5620.972 - 5646.178: 0.3879% ( 3) 00:08:05.433 5646.178 - 5671.385: 0.3989% ( 2) 00:08:05.433 5671.385 - 5696.591: 0.4211% ( 4) 00:08:05.433 5696.591 - 5721.797: 0.4710% ( 9) 00:08:05.433 5721.797 - 5747.003: 0.5319% ( 11) 00:08:05.433 5747.003 - 5772.209: 0.5596% ( 5) 00:08:05.433 5772.209 - 5797.415: 0.6095% ( 9) 00:08:05.433 5797.415 - 5822.622: 0.6261% ( 3) 00:08:05.433 5822.622 - 5847.828: 0.6372% ( 2) 00:08:05.433 5847.828 - 5873.034: 0.6649% ( 5) 00:08:05.433 5873.034 - 5898.240: 0.6981% ( 6) 00:08:05.433 5898.240 - 5923.446: 0.7258% ( 5) 00:08:05.433 5923.446 - 5948.652: 0.7702% ( 8) 00:08:05.433 5948.652 - 5973.858: 0.7812% ( 2) 00:08:05.433 5973.858 - 5999.065: 0.7979% ( 3) 00:08:05.433 5999.065 - 6024.271: 0.8256% ( 5) 00:08:05.433 6024.271 - 6049.477: 0.8699% ( 8) 00:08:05.433 6049.477 - 6074.683: 0.9142% ( 8) 00:08:05.433 6074.683 - 6099.889: 0.9586% ( 8) 00:08:05.433 6099.889 - 6125.095: 1.0638% ( 19) 00:08:05.433 6125.095 - 6150.302: 1.1636% ( 18) 00:08:05.433 6150.302 - 6175.508: 1.4074% ( 44) 00:08:05.433 6175.508 - 6200.714: 1.5957% ( 34) 00:08:05.433 6200.714 - 6225.920: 1.8672% ( 49) 00:08:05.433 6225.920 - 6251.126: 2.1221% ( 46) 00:08:05.433 6251.126 - 6276.332: 2.5266% ( 73) 00:08:05.433 6276.332 - 6301.538: 3.0031% ( 86) 00:08:05.433 6301.538 - 6326.745: 3.7289% ( 131) 00:08:05.433 6326.745 - 6351.951: 4.3938% ( 120) 00:08:05.433 6351.951 - 6377.157: 5.1862% ( 143) 00:08:05.433 6377.157 - 6402.363: 6.0616% ( 158) 00:08:05.433 6402.363 - 6427.569: 7.1254% ( 192) 00:08:05.433 6427.569 - 6452.775: 8.5660% ( 260) 00:08:05.433 6452.775 - 6503.188: 11.8739% ( 597) 00:08:05.433 6503.188 - 6553.600: 15.9187% ( 730) 00:08:05.433 6553.600 - 6604.012: 21.8362% ( 1068) 00:08:05.433 6604.012 - 6654.425: 28.5406% ( 1210) 00:08:05.433 6654.425 - 6704.837: 36.2367% ( 1389) 00:08:05.433 6704.837 - 6755.249: 43.1184% ( 1242) 00:08:05.433 6755.249 - 6805.662: 49.7950% ( 1205) 00:08:05.433 6805.662 - 6856.074: 55.0754% ( 953) 00:08:05.433 6856.074 - 6906.486: 59.2753% ( 758) 00:08:05.433 6906.486 - 6956.898: 64.0126% ( 855) 00:08:05.433 6956.898 - 7007.311: 67.4701% ( 624) 00:08:05.433 7007.311 - 7057.723: 70.7890% ( 599) 00:08:05.433 7057.723 - 7108.135: 73.0663% ( 411) 00:08:05.433 7108.135 - 7158.548: 75.2937% ( 402) 00:08:05.433 7158.548 - 7208.960: 77.1886% ( 342) 00:08:05.433 7208.960 - 7259.372: 78.5350% ( 243) 00:08:05.433 7259.372 - 7309.785: 79.9091% ( 248) 00:08:05.433 7309.785 - 7360.197: 81.2500% ( 242) 00:08:05.433 7360.197 - 7410.609: 82.1365% ( 160) 00:08:05.433 7410.609 - 7461.022: 82.9787% ( 152) 00:08:05.433 7461.022 - 7511.434: 84.1755% ( 216) 00:08:05.433 7511.434 - 7561.846: 85.1840% ( 182) 00:08:05.433 7561.846 - 7612.258: 85.9763% ( 143) 00:08:05.433 7612.258 - 7662.671: 86.8074% ( 150) 00:08:05.433 7662.671 - 7713.083: 87.8103% ( 181) 00:08:05.433 7713.083 - 7763.495: 88.5084% ( 126) 00:08:05.433 7763.495 - 7813.908: 89.2509% ( 134) 00:08:05.433 7813.908 - 7864.320: 90.0598% ( 146) 00:08:05.433 7864.320 - 7914.732: 90.8522% ( 143) 00:08:05.433 7914.732 - 7965.145: 91.5060% ( 118) 00:08:05.433 7965.145 - 8015.557: 92.0379% ( 96) 00:08:05.433 8015.557 - 8065.969: 92.5975% ( 101) 
00:08:05.433 8065.969 - 8116.382: 93.0851% ( 88) 00:08:05.433 8116.382 - 8166.794: 93.4619% ( 68) 00:08:05.433 8166.794 - 8217.206: 93.7334% ( 49) 00:08:05.433 8217.206 - 8267.618: 94.0935% ( 65) 00:08:05.433 8267.618 - 8318.031: 94.5035% ( 74) 00:08:05.433 8318.031 - 8368.443: 94.7141% ( 38) 00:08:05.433 8368.443 - 8418.855: 94.8914% ( 32) 00:08:05.433 8418.855 - 8469.268: 95.0576% ( 30) 00:08:05.433 8469.268 - 8519.680: 95.2626% ( 37) 00:08:05.433 8519.680 - 8570.092: 95.5563% ( 53) 00:08:05.433 8570.092 - 8620.505: 95.8666% ( 56) 00:08:05.433 8620.505 - 8670.917: 96.0494% ( 33) 00:08:05.433 8670.917 - 8721.329: 96.2101% ( 29) 00:08:05.433 8721.329 - 8771.742: 96.3930% ( 33) 00:08:05.433 8771.742 - 8822.154: 96.6589% ( 48) 00:08:05.433 8822.154 - 8872.566: 96.8695% ( 38) 00:08:05.433 8872.566 - 8922.978: 96.9858% ( 21) 00:08:05.433 8922.978 - 8973.391: 97.0689% ( 15) 00:08:05.433 8973.391 - 9023.803: 97.1797% ( 20) 00:08:05.433 9023.803 - 9074.215: 97.3072% ( 23) 00:08:05.433 9074.215 - 9124.628: 97.4402% ( 24) 00:08:05.433 9124.628 - 9175.040: 97.5565% ( 21) 00:08:05.433 9175.040 - 9225.452: 97.6396% ( 15) 00:08:05.433 9225.452 - 9275.865: 97.7172% ( 14) 00:08:05.433 9275.865 - 9326.277: 97.7560% ( 7) 00:08:05.433 9326.277 - 9376.689: 97.7892% ( 6) 00:08:05.433 9376.689 - 9427.102: 97.8280% ( 7) 00:08:05.433 9427.102 - 9477.514: 97.8613% ( 6) 00:08:05.433 9477.514 - 9527.926: 97.9056% ( 8) 00:08:05.433 9527.926 - 9578.338: 97.9388% ( 6) 00:08:05.433 9578.338 - 9628.751: 97.9665% ( 5) 00:08:05.433 9628.751 - 9679.163: 97.9998% ( 6) 00:08:05.433 9679.163 - 9729.575: 98.0441% ( 8) 00:08:05.433 9729.575 - 9779.988: 98.0995% ( 10) 00:08:05.433 9779.988 - 9830.400: 98.1328% ( 6) 00:08:05.433 9830.400 - 9880.812: 98.1494% ( 3) 00:08:05.433 9880.812 - 9931.225: 98.1660% ( 3) 00:08:05.433 9931.225 - 9981.637: 98.1715% ( 1) 00:08:05.433 9981.637 - 10032.049: 98.1826% ( 2) 00:08:05.433 10032.049 - 10082.462: 98.1937% ( 2) 00:08:05.433 10082.462 - 10132.874: 98.2048% ( 2) 00:08:05.433 10132.874 - 10183.286: 98.2159% ( 2) 00:08:05.433 10183.286 - 10233.698: 98.2491% ( 6) 00:08:05.433 10233.698 - 10284.111: 98.2824% ( 6) 00:08:05.433 10284.111 - 10334.523: 98.3045% ( 4) 00:08:05.433 10334.523 - 10384.935: 98.3876% ( 15) 00:08:05.433 10384.935 - 10435.348: 98.4652% ( 14) 00:08:05.433 10435.348 - 10485.760: 98.5206% ( 10) 00:08:05.433 10485.760 - 10536.172: 98.5926% ( 13) 00:08:05.433 10536.172 - 10586.585: 98.6314% ( 7) 00:08:05.433 10586.585 - 10636.997: 98.6536% ( 4) 00:08:05.433 10636.997 - 10687.409: 98.6868% ( 6) 00:08:05.433 10687.409 - 10737.822: 98.7090% ( 4) 00:08:05.433 10737.822 - 10788.234: 98.7367% ( 5) 00:08:05.434 10788.234 - 10838.646: 98.7644% ( 5) 00:08:05.434 10838.646 - 10889.058: 98.7977% ( 6) 00:08:05.434 10889.058 - 10939.471: 98.8309% ( 6) 00:08:05.434 10939.471 - 10989.883: 98.8641% ( 6) 00:08:05.434 10989.883 - 11040.295: 98.8918% ( 5) 00:08:05.434 11040.295 - 11090.708: 98.9251% ( 6) 00:08:05.434 11090.708 - 11141.120: 98.9362% ( 2) 00:08:05.434 12653.489 - 12703.902: 98.9417% ( 1) 00:08:05.434 12703.902 - 12754.314: 98.9473% ( 1) 00:08:05.434 12905.551 - 13006.375: 98.9528% ( 1) 00:08:05.434 13006.375 - 13107.200: 98.9916% ( 7) 00:08:05.434 13107.200 - 13208.025: 99.0248% ( 6) 00:08:05.434 13208.025 - 13308.849: 99.0525% ( 5) 00:08:05.434 13308.849 - 13409.674: 99.0747% ( 4) 00:08:05.434 13409.674 - 13510.498: 99.1190% ( 8) 00:08:05.434 13510.498 - 13611.323: 99.2908% ( 31) 00:08:05.434 15728.640 - 15829.465: 99.3296% ( 7) 00:08:05.434 15829.465 - 15930.289: 99.3628% ( 6) 
00:08:05.434 15930.289 - 16031.114: 99.4071% ( 8) 00:08:05.434 16031.114 - 16131.938: 99.4570% ( 9) 00:08:05.434 16131.938 - 16232.763: 99.4958% ( 7) 00:08:05.434 16232.763 - 16333.588: 99.5235% ( 5) 00:08:05.434 16333.588 - 16434.412: 99.5512% ( 5) 00:08:05.434 16434.412 - 16535.237: 99.5844% ( 6) 00:08:05.434 16535.237 - 16636.062: 99.6121% ( 5) 00:08:05.434 16636.062 - 16736.886: 99.6454% ( 6) 00:08:05.434 19156.677 - 19257.502: 99.6509% ( 1) 00:08:05.434 19257.502 - 19358.326: 99.6786% ( 5) 00:08:05.434 19358.326 - 19459.151: 99.7340% ( 10) 00:08:05.434 19459.151 - 19559.975: 99.8670% ( 24) 00:08:05.434 19559.975 - 19660.800: 99.9003% ( 6) 00:08:05.434 19660.800 - 19761.625: 99.9391% ( 7) 00:08:05.434 19761.625 - 19862.449: 99.9446% ( 1) 00:08:05.434 19862.449 - 19963.274: 99.9557% ( 2) 00:08:05.434 19963.274 - 20064.098: 99.9834% ( 5) 00:08:05.434 20064.098 - 20164.923: 100.0000% ( 3) 00:08:05.434 00:08:05.434 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:05.434 ============================================================================== 00:08:05.434 Range in us Cumulative IO count 00:08:05.434 3755.717 - 3780.923: 0.0111% ( 2) 00:08:05.434 3780.923 - 3806.129: 0.0277% ( 3) 00:08:05.434 3806.129 - 3831.335: 0.0388% ( 2) 00:08:05.434 3831.335 - 3856.542: 0.0887% ( 9) 00:08:05.434 3856.542 - 3881.748: 0.1496% ( 11) 00:08:05.434 3881.748 - 3906.954: 0.2105% ( 11) 00:08:05.434 3906.954 - 3932.160: 0.2660% ( 10) 00:08:05.434 3932.160 - 3957.366: 0.2826% ( 3) 00:08:05.434 3957.366 - 3982.572: 0.2937% ( 2) 00:08:05.434 3982.572 - 4007.778: 0.3047% ( 2) 00:08:05.434 4007.778 - 4032.985: 0.3158% ( 2) 00:08:05.434 4032.985 - 4058.191: 0.3269% ( 2) 00:08:05.434 4058.191 - 4083.397: 0.3380% ( 2) 00:08:05.434 4083.397 - 4108.603: 0.3491% ( 2) 00:08:05.434 4108.603 - 4133.809: 0.3546% ( 1) 00:08:05.434 5343.705 - 5368.911: 0.3602% ( 1) 00:08:05.434 5419.323 - 5444.529: 0.3657% ( 1) 00:08:05.434 5444.529 - 5469.735: 0.3823% ( 3) 00:08:05.434 5469.735 - 5494.942: 0.4045% ( 4) 00:08:05.434 5494.942 - 5520.148: 0.4876% ( 15) 00:08:05.434 5520.148 - 5545.354: 0.5541% ( 12) 00:08:05.434 5545.354 - 5570.560: 0.5984% ( 8) 00:08:05.434 5570.560 - 5595.766: 0.6150% ( 3) 00:08:05.434 5595.766 - 5620.972: 0.6261% ( 2) 00:08:05.434 5620.972 - 5646.178: 0.6372% ( 2) 00:08:05.434 5646.178 - 5671.385: 0.6483% ( 2) 00:08:05.434 5671.385 - 5696.591: 0.6594% ( 2) 00:08:05.434 5696.591 - 5721.797: 0.6704% ( 2) 00:08:05.434 5721.797 - 5747.003: 0.6815% ( 2) 00:08:05.434 5747.003 - 5772.209: 0.6926% ( 2) 00:08:05.434 5772.209 - 5797.415: 0.7092% ( 3) 00:08:05.434 5898.240 - 5923.446: 0.7148% ( 1) 00:08:05.434 5973.858 - 5999.065: 0.7203% ( 1) 00:08:05.434 5999.065 - 6024.271: 0.7258% ( 1) 00:08:05.434 6049.477 - 6074.683: 0.7535% ( 5) 00:08:05.434 6074.683 - 6099.889: 0.8200% ( 12) 00:08:05.434 6099.889 - 6125.095: 0.9031% ( 15) 00:08:05.434 6125.095 - 6150.302: 1.0084% ( 19) 00:08:05.434 6150.302 - 6175.508: 1.1303% ( 22) 00:08:05.434 6175.508 - 6200.714: 1.2522% ( 22) 00:08:05.434 6200.714 - 6225.920: 1.4129% ( 29) 00:08:05.434 6225.920 - 6251.126: 1.6733% ( 47) 00:08:05.434 6251.126 - 6276.332: 2.0833% ( 74) 00:08:05.434 6276.332 - 6301.538: 2.6042% ( 94) 00:08:05.434 6301.538 - 6326.745: 3.2247% ( 112) 00:08:05.434 6326.745 - 6351.951: 3.7733% ( 99) 00:08:05.434 6351.951 - 6377.157: 4.3994% ( 113) 00:08:05.434 6377.157 - 6402.363: 5.4023% ( 181) 00:08:05.434 6402.363 - 6427.569: 6.7043% ( 235) 00:08:05.434 6427.569 - 6452.775: 7.9676% ( 228) 00:08:05.434 6452.775 - 6503.188: 11.4528% ( 629) 
00:08:05.434 6503.188 - 6553.600: 16.0295% ( 826) 00:08:05.434 6553.600 - 6604.012: 21.7365% ( 1030) 00:08:05.434 6604.012 - 6654.425: 28.2469% ( 1175) 00:08:05.434 6654.425 - 6704.837: 35.8821% ( 1378) 00:08:05.434 6704.837 - 6755.249: 43.5173% ( 1378) 00:08:05.434 6755.249 - 6805.662: 50.0332% ( 1176) 00:08:05.434 6805.662 - 6856.074: 54.8039% ( 861) 00:08:05.434 6856.074 - 6906.486: 60.0676% ( 950) 00:08:05.434 6906.486 - 6956.898: 63.7245% ( 660) 00:08:05.434 6956.898 - 7007.311: 67.3648% ( 657) 00:08:05.434 7007.311 - 7057.723: 70.4510% ( 557) 00:08:05.434 7057.723 - 7108.135: 73.1106% ( 480) 00:08:05.434 7108.135 - 7158.548: 74.9058% ( 324) 00:08:05.434 7158.548 - 7208.960: 76.7453% ( 332) 00:08:05.434 7208.960 - 7259.372: 78.3854% ( 296) 00:08:05.434 7259.372 - 7309.785: 80.0199% ( 295) 00:08:05.434 7309.785 - 7360.197: 81.3497% ( 240) 00:08:05.434 7360.197 - 7410.609: 82.3859% ( 187) 00:08:05.434 7410.609 - 7461.022: 83.7434% ( 245) 00:08:05.434 7461.022 - 7511.434: 84.6797% ( 169) 00:08:05.434 7511.434 - 7561.846: 85.3834% ( 127) 00:08:05.434 7561.846 - 7612.258: 86.2699% ( 160) 00:08:05.434 7612.258 - 7662.671: 87.4003% ( 204) 00:08:05.434 7662.671 - 7713.083: 88.2535% ( 154) 00:08:05.434 7713.083 - 7763.495: 89.1234% ( 157) 00:08:05.434 7763.495 - 7813.908: 89.7606% ( 115) 00:08:05.434 7813.908 - 7864.320: 90.5751% ( 147) 00:08:05.434 7864.320 - 7914.732: 91.2954% ( 130) 00:08:05.434 7914.732 - 7965.145: 91.8717% ( 104) 00:08:05.434 7965.145 - 8015.557: 92.3149% ( 80) 00:08:05.434 8015.557 - 8065.969: 92.6418% ( 59) 00:08:05.434 8065.969 - 8116.382: 92.9577% ( 57) 00:08:05.434 8116.382 - 8166.794: 93.2957% ( 61) 00:08:05.434 8166.794 - 8217.206: 93.6669% ( 67) 00:08:05.434 8217.206 - 8267.618: 94.1656% ( 90) 00:08:05.434 8267.618 - 8318.031: 94.7086% ( 98) 00:08:05.434 8318.031 - 8368.443: 94.9967% ( 52) 00:08:05.434 8368.443 - 8418.855: 95.2848% ( 52) 00:08:05.434 8418.855 - 8469.268: 95.5175% ( 42) 00:08:05.434 8469.268 - 8519.680: 95.7114% ( 35) 00:08:05.434 8519.680 - 8570.092: 95.9552% ( 44) 00:08:05.434 8570.092 - 8620.505: 96.1879% ( 42) 00:08:05.434 8620.505 - 8670.917: 96.3763% ( 34) 00:08:05.434 8670.917 - 8721.329: 96.5481% ( 31) 00:08:05.434 8721.329 - 8771.742: 96.6866% ( 25) 00:08:05.434 8771.742 - 8822.154: 96.8528% ( 30) 00:08:05.434 8822.154 - 8872.566: 97.0024% ( 27) 00:08:05.434 8872.566 - 8922.978: 97.1299% ( 23) 00:08:05.434 8922.978 - 8973.391: 97.2019% ( 13) 00:08:05.434 8973.391 - 9023.803: 97.3183% ( 21) 00:08:05.434 9023.803 - 9074.215: 97.3903% ( 13) 00:08:05.434 9074.215 - 9124.628: 97.4180% ( 5) 00:08:05.434 9124.628 - 9175.040: 97.4457% ( 5) 00:08:05.434 9175.040 - 9225.452: 97.4734% ( 5) 00:08:05.434 9225.452 - 9275.865: 97.5066% ( 6) 00:08:05.434 9275.865 - 9326.277: 97.5565% ( 9) 00:08:05.434 9326.277 - 9376.689: 97.5898% ( 6) 00:08:05.434 9376.689 - 9427.102: 97.6230% ( 6) 00:08:05.434 9427.102 - 9477.514: 97.6562% ( 6) 00:08:05.434 9477.514 - 9527.926: 97.6950% ( 7) 00:08:05.434 9527.926 - 9578.338: 97.7394% ( 8) 00:08:05.434 9578.338 - 9628.751: 97.7892% ( 9) 00:08:05.434 9628.751 - 9679.163: 97.8391% ( 9) 00:08:05.434 9679.163 - 9729.575: 97.8834% ( 8) 00:08:05.434 9729.575 - 9779.988: 97.9555% ( 13) 00:08:05.434 9779.988 - 9830.400: 98.0219% ( 12) 00:08:05.434 9830.400 - 9880.812: 98.0718% ( 9) 00:08:05.434 9880.812 - 9931.225: 98.1051% ( 6) 00:08:05.434 9931.225 - 9981.637: 98.1383% ( 6) 00:08:05.434 9981.637 - 10032.049: 98.1660% ( 5) 00:08:05.434 10032.049 - 10082.462: 98.1771% ( 2) 00:08:05.434 10082.462 - 10132.874: 98.1992% ( 4) 
00:08:05.434 10132.874 - 10183.286: 98.2270% ( 5) 00:08:05.434 10183.286 - 10233.698: 98.2491% ( 4) 00:08:05.434 10233.698 - 10284.111: 98.2934% ( 8) 00:08:05.434 10284.111 - 10334.523: 98.3378% ( 8) 00:08:05.434 10334.523 - 10384.935: 98.3876% ( 9) 00:08:05.434 10384.935 - 10435.348: 98.4209% ( 6) 00:08:05.434 10435.348 - 10485.760: 98.4707% ( 9) 00:08:05.434 10485.760 - 10536.172: 98.5206% ( 9) 00:08:05.435 10536.172 - 10586.585: 98.5816% ( 11) 00:08:05.435 10586.585 - 10636.997: 98.6425% ( 11) 00:08:05.435 10636.997 - 10687.409: 98.6868% ( 8) 00:08:05.435 10687.409 - 10737.822: 98.7312% ( 8) 00:08:05.435 10737.822 - 10788.234: 98.7644% ( 6) 00:08:05.435 10788.234 - 10838.646: 98.7921% ( 5) 00:08:05.435 10838.646 - 10889.058: 98.8254% ( 6) 00:08:05.435 10889.058 - 10939.471: 98.8531% ( 5) 00:08:05.435 10939.471 - 10989.883: 98.8808% ( 5) 00:08:05.435 10989.883 - 11040.295: 98.8974% ( 3) 00:08:05.435 11040.295 - 11090.708: 98.9195% ( 4) 00:08:05.435 11090.708 - 11141.120: 98.9362% ( 3) 00:08:05.435 13208.025 - 13308.849: 98.9417% ( 1) 00:08:05.435 13308.849 - 13409.674: 98.9473% ( 1) 00:08:05.435 13409.674 - 13510.498: 99.0027% ( 10) 00:08:05.435 13510.498 - 13611.323: 99.2520% ( 45) 00:08:05.435 13611.323 - 13712.148: 99.2852% ( 6) 00:08:05.435 13712.148 - 13812.972: 99.2908% ( 1) 00:08:05.435 15224.517 - 15325.342: 99.2963% ( 1) 00:08:05.435 15325.342 - 15426.166: 99.3019% ( 1) 00:08:05.435 15526.991 - 15627.815: 99.3684% ( 12) 00:08:05.435 15627.815 - 15728.640: 99.4238% ( 10) 00:08:05.435 15728.640 - 15829.465: 99.4736% ( 9) 00:08:05.435 15829.465 - 15930.289: 99.5567% ( 15) 00:08:05.435 15930.289 - 16031.114: 99.5900% ( 6) 00:08:05.435 16031.114 - 16131.938: 99.6177% ( 5) 00:08:05.435 16131.938 - 16232.763: 99.6454% ( 5) 00:08:05.435 18753.378 - 18854.203: 99.6565% ( 2) 00:08:05.435 18854.203 - 18955.028: 99.7230% ( 12) 00:08:05.435 18955.028 - 19055.852: 99.9224% ( 36) 00:08:05.435 19055.852 - 19156.677: 99.9501% ( 5) 00:08:05.435 19156.677 - 19257.502: 99.9778% ( 5) 00:08:05.435 19257.502 - 19358.326: 100.0000% ( 4) 00:08:05.435 00:08:05.435 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:05.435 ============================================================================== 00:08:05.435 Range in us Cumulative IO count 00:08:05.435 3503.655 - 3528.862: 0.0055% ( 1) 00:08:05.435 3528.862 - 3554.068: 0.0222% ( 3) 00:08:05.435 3554.068 - 3579.274: 0.0499% ( 5) 00:08:05.435 3579.274 - 3604.480: 0.0776% ( 5) 00:08:05.435 3604.480 - 3629.686: 0.1108% ( 6) 00:08:05.435 3629.686 - 3654.892: 0.1995% ( 16) 00:08:05.435 3654.892 - 3680.098: 0.2272% ( 5) 00:08:05.435 3680.098 - 3705.305: 0.2549% ( 5) 00:08:05.435 3705.305 - 3730.511: 0.2660% ( 2) 00:08:05.435 3730.511 - 3755.717: 0.2770% ( 2) 00:08:05.435 3755.717 - 3780.923: 0.2881% ( 2) 00:08:05.435 3780.923 - 3806.129: 0.2992% ( 2) 00:08:05.435 3806.129 - 3831.335: 0.3158% ( 3) 00:08:05.435 3831.335 - 3856.542: 0.3269% ( 2) 00:08:05.435 3856.542 - 3881.748: 0.3380% ( 2) 00:08:05.435 3881.748 - 3906.954: 0.3546% ( 3) 00:08:05.435 5016.025 - 5041.231: 0.3602% ( 1) 00:08:05.435 5066.437 - 5091.643: 0.3657% ( 1) 00:08:05.435 5142.055 - 5167.262: 0.3768% ( 2) 00:08:05.435 5167.262 - 5192.468: 0.3934% ( 3) 00:08:05.435 5192.468 - 5217.674: 0.4045% ( 2) 00:08:05.435 5217.674 - 5242.880: 0.4156% ( 2) 00:08:05.435 5242.880 - 5268.086: 0.4266% ( 2) 00:08:05.435 5268.086 - 5293.292: 0.4433% ( 3) 00:08:05.435 5293.292 - 5318.498: 0.4654% ( 4) 00:08:05.435 5318.498 - 5343.705: 0.5208% ( 10) 00:08:05.435 5343.705 - 5368.911: 0.5430% ( 
4) 00:08:05.435 5368.911 - 5394.117: 0.6039% ( 11) 00:08:05.435 5394.117 - 5419.323: 0.6594% ( 10) 00:08:05.435 5419.323 - 5444.529: 0.6704% ( 2) 00:08:05.435 5444.529 - 5469.735: 0.6815% ( 2) 00:08:05.435 5469.735 - 5494.942: 0.6926% ( 2) 00:08:05.435 5494.942 - 5520.148: 0.7037% ( 2) 00:08:05.435 5520.148 - 5545.354: 0.7092% ( 1) 00:08:05.435 5847.828 - 5873.034: 0.7148% ( 1) 00:08:05.435 5898.240 - 5923.446: 0.7203% ( 1) 00:08:05.435 5999.065 - 6024.271: 0.7314% ( 2) 00:08:05.435 6024.271 - 6049.477: 0.7480% ( 3) 00:08:05.435 6049.477 - 6074.683: 0.8145% ( 12) 00:08:05.435 6074.683 - 6099.889: 0.8810% ( 12) 00:08:05.435 6099.889 - 6125.095: 0.9586% ( 14) 00:08:05.435 6125.095 - 6150.302: 1.0805% ( 22) 00:08:05.435 6150.302 - 6175.508: 1.2079% ( 23) 00:08:05.435 6175.508 - 6200.714: 1.3630% ( 28) 00:08:05.435 6200.714 - 6225.920: 1.5182% ( 28) 00:08:05.435 6225.920 - 6251.126: 1.7897% ( 49) 00:08:05.435 6251.126 - 6276.332: 2.0335% ( 44) 00:08:05.435 6276.332 - 6301.538: 2.6873% ( 118) 00:08:05.435 6301.538 - 6326.745: 3.2469% ( 101) 00:08:05.435 6326.745 - 6351.951: 3.9173% ( 121) 00:08:05.435 6351.951 - 6377.157: 4.6321% ( 129) 00:08:05.435 6377.157 - 6402.363: 5.6904% ( 191) 00:08:05.435 6402.363 - 6427.569: 6.6046% ( 165) 00:08:05.435 6427.569 - 6452.775: 7.7072% ( 199) 00:08:05.435 6452.775 - 6503.188: 11.6523% ( 712) 00:08:05.435 6503.188 - 6553.600: 16.6390% ( 900) 00:08:05.435 6553.600 - 6604.012: 22.3460% ( 1030) 00:08:05.435 6604.012 - 6654.425: 28.5073% ( 1112) 00:08:05.435 6654.425 - 6704.837: 36.4195% ( 1428) 00:08:05.435 6704.837 - 6755.249: 43.6613% ( 1307) 00:08:05.435 6755.249 - 6805.662: 50.2161% ( 1183) 00:08:05.435 6805.662 - 6856.074: 55.3524% ( 927) 00:08:05.435 6856.074 - 6906.486: 59.7573% ( 795) 00:08:05.435 6906.486 - 6956.898: 63.8464% ( 738) 00:08:05.435 6956.898 - 7007.311: 67.0324% ( 575) 00:08:05.435 7007.311 - 7057.723: 70.3568% ( 600) 00:08:05.435 7057.723 - 7108.135: 72.7615% ( 434) 00:08:05.435 7108.135 - 7158.548: 75.0887% ( 420) 00:08:05.435 7158.548 - 7208.960: 77.2994% ( 399) 00:08:05.435 7208.960 - 7259.372: 78.6735% ( 248) 00:08:05.435 7259.372 - 7309.785: 79.7484% ( 194) 00:08:05.435 7309.785 - 7360.197: 80.6294% ( 159) 00:08:05.435 7360.197 - 7410.609: 81.8041% ( 212) 00:08:05.435 7410.609 - 7461.022: 82.8347% ( 186) 00:08:05.435 7461.022 - 7511.434: 83.6879% ( 154) 00:08:05.435 7511.434 - 7561.846: 85.0011% ( 237) 00:08:05.435 7561.846 - 7612.258: 86.1536% ( 208) 00:08:05.435 7612.258 - 7662.671: 86.9293% ( 140) 00:08:05.435 7662.671 - 7713.083: 87.8158% ( 160) 00:08:05.435 7713.083 - 7763.495: 88.6968% ( 159) 00:08:05.435 7763.495 - 7813.908: 89.6166% ( 166) 00:08:05.435 7813.908 - 7864.320: 90.1263% ( 92) 00:08:05.435 7864.320 - 7914.732: 90.8245% ( 126) 00:08:05.435 7914.732 - 7965.145: 91.3952% ( 103) 00:08:05.435 7965.145 - 8015.557: 91.8828% ( 88) 00:08:05.435 8015.557 - 8065.969: 92.2762% ( 71) 00:08:05.435 8065.969 - 8116.382: 92.9300% ( 118) 00:08:05.435 8116.382 - 8166.794: 93.2458% ( 57) 00:08:05.435 8166.794 - 8217.206: 93.8774% ( 114) 00:08:05.435 8217.206 - 8267.618: 94.2376% ( 65) 00:08:05.435 8267.618 - 8318.031: 94.5534% ( 57) 00:08:05.435 8318.031 - 8368.443: 94.8969% ( 62) 00:08:05.435 8368.443 - 8418.855: 95.1518% ( 46) 00:08:05.435 8418.855 - 8469.268: 95.4233% ( 49) 00:08:05.435 8469.268 - 8519.680: 95.6449% ( 40) 00:08:05.435 8519.680 - 8570.092: 95.8832% ( 43) 00:08:05.435 8570.092 - 8620.505: 96.1104% ( 41) 00:08:05.435 8620.505 - 8670.917: 96.2489% ( 25) 00:08:05.435 8670.917 - 8721.329: 96.4151% ( 30) 00:08:05.435 
8721.329 - 8771.742: 96.5592% ( 26) 00:08:05.435 8771.742 - 8822.154: 96.6922% ( 24) 00:08:05.435 8822.154 - 8872.566: 96.7974% ( 19) 00:08:05.435 8872.566 - 8922.978: 96.8916% ( 17) 00:08:05.435 8922.978 - 8973.391: 96.9969% ( 19) 00:08:05.435 8973.391 - 9023.803: 97.0689% ( 13) 00:08:05.435 9023.803 - 9074.215: 97.1964% ( 23) 00:08:05.435 9074.215 - 9124.628: 97.3127% ( 21) 00:08:05.435 9124.628 - 9175.040: 97.3737% ( 11) 00:08:05.435 9175.040 - 9225.452: 97.4457% ( 13) 00:08:05.435 9225.452 - 9275.865: 97.5676% ( 22) 00:08:05.435 9275.865 - 9326.277: 97.7227% ( 28) 00:08:05.435 9326.277 - 9376.689: 97.8003% ( 14) 00:08:05.436 9376.689 - 9427.102: 97.8502% ( 9) 00:08:05.436 9427.102 - 9477.514: 97.8945% ( 8) 00:08:05.436 9477.514 - 9527.926: 97.9444% ( 9) 00:08:05.436 9527.926 - 9578.338: 97.9998% ( 10) 00:08:05.436 9578.338 - 9628.751: 98.0829% ( 15) 00:08:05.436 9628.751 - 9679.163: 98.2436% ( 29) 00:08:05.436 9679.163 - 9729.575: 98.3101% ( 12) 00:08:05.436 9729.575 - 9779.988: 98.3710% ( 11) 00:08:05.436 9779.988 - 9830.400: 98.4098% ( 7) 00:08:05.436 9830.400 - 9880.812: 98.4430% ( 6) 00:08:05.436 9880.812 - 9931.225: 98.4652% ( 4) 00:08:05.436 9931.225 - 9981.637: 98.4707% ( 1) 00:08:05.436 9981.637 - 10032.049: 98.4763% ( 1) 00:08:05.436 10032.049 - 10082.462: 98.4874% ( 2) 00:08:05.436 10082.462 - 10132.874: 98.4929% ( 1) 00:08:05.436 10132.874 - 10183.286: 98.4984% ( 1) 00:08:05.436 10183.286 - 10233.698: 98.5040% ( 1) 00:08:05.436 10233.698 - 10284.111: 98.5151% ( 2) 00:08:05.436 10284.111 - 10334.523: 98.5206% ( 1) 00:08:05.436 10334.523 - 10384.935: 98.5262% ( 1) 00:08:05.436 10384.935 - 10435.348: 98.5317% ( 1) 00:08:05.436 10435.348 - 10485.760: 98.5428% ( 2) 00:08:05.436 10485.760 - 10536.172: 98.5483% ( 1) 00:08:05.436 10536.172 - 10586.585: 98.5539% ( 1) 00:08:05.436 10586.585 - 10636.997: 98.5594% ( 1) 00:08:05.436 10636.997 - 10687.409: 98.5760% ( 3) 00:08:05.436 10687.409 - 10737.822: 98.6037% ( 5) 00:08:05.436 10737.822 - 10788.234: 98.6203% ( 3) 00:08:05.436 10788.234 - 10838.646: 98.6259% ( 1) 00:08:05.436 10838.646 - 10889.058: 98.6425% ( 3) 00:08:05.436 10889.058 - 10939.471: 98.6536% ( 2) 00:08:05.436 10939.471 - 10989.883: 98.6647% ( 2) 00:08:05.436 11040.295 - 11090.708: 98.6813% ( 3) 00:08:05.436 11090.708 - 11141.120: 98.7035% ( 4) 00:08:05.436 11141.120 - 11191.532: 98.7312% ( 5) 00:08:05.436 11191.532 - 11241.945: 98.7755% ( 8) 00:08:05.436 11241.945 - 11292.357: 98.8475% ( 13) 00:08:05.436 11292.357 - 11342.769: 98.8697% ( 4) 00:08:05.436 11342.769 - 11393.182: 98.8863% ( 3) 00:08:05.436 11393.182 - 11443.594: 98.9195% ( 6) 00:08:05.436 11443.594 - 11494.006: 98.9362% ( 3) 00:08:05.436 12653.489 - 12703.902: 98.9528% ( 3) 00:08:05.436 12703.902 - 12754.314: 98.9750% ( 4) 00:08:05.436 12754.314 - 12804.726: 99.0193% ( 8) 00:08:05.436 12804.726 - 12855.138: 99.0636% ( 8) 00:08:05.436 12855.138 - 12905.551: 99.0858% ( 4) 00:08:05.436 12905.551 - 13006.375: 99.1578% ( 13) 00:08:05.436 13006.375 - 13107.200: 99.1800% ( 4) 00:08:05.436 13107.200 - 13208.025: 99.1910% ( 2) 00:08:05.436 13208.025 - 13308.849: 99.2132% ( 4) 00:08:05.436 13308.849 - 13409.674: 99.2354% ( 4) 00:08:05.436 13409.674 - 13510.498: 99.2520% ( 3) 00:08:05.436 13510.498 - 13611.323: 99.2742% ( 4) 00:08:05.436 13611.323 - 13712.148: 99.2908% ( 3) 00:08:05.436 15123.692 - 15224.517: 99.3240% ( 6) 00:08:05.436 15224.517 - 15325.342: 99.5789% ( 46) 00:08:05.436 15325.342 - 15426.166: 99.6232% ( 8) 00:08:05.436 15426.166 - 15526.991: 99.6454% ( 4) 00:08:05.436 18047.606 - 18148.431: 99.6731% ( 
5) 00:08:05.436 18148.431 - 18249.255: 99.7063% ( 6) 00:08:05.436 18249.255 - 18350.080: 99.7340% ( 5) 00:08:05.436 18350.080 - 18450.905: 99.7673% ( 6) 00:08:05.436 18450.905 - 18551.729: 99.8892% ( 22) 00:08:05.436 18551.729 - 18652.554: 99.9224% ( 6) 00:08:05.436 18652.554 - 18753.378: 99.9557% ( 6) 00:08:05.436 18753.378 - 18854.203: 99.9945% ( 7) 00:08:05.436 18854.203 - 18955.028: 100.0000% ( 1) 00:08:05.436 00:08:05.436 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.436 ============================================================================== 00:08:05.436 Range in us Cumulative IO count 00:08:05.436 3276.800 - 3302.006: 0.0222% ( 4) 00:08:05.436 3302.006 - 3327.212: 0.0720% ( 9) 00:08:05.436 3327.212 - 3352.418: 0.1496% ( 14) 00:08:05.436 3352.418 - 3377.625: 0.2327% ( 15) 00:08:05.436 3377.625 - 3402.831: 0.2770% ( 8) 00:08:05.436 3402.831 - 3428.037: 0.2881% ( 2) 00:08:05.436 3428.037 - 3453.243: 0.2992% ( 2) 00:08:05.436 3453.243 - 3478.449: 0.3103% ( 2) 00:08:05.436 3478.449 - 3503.655: 0.3214% ( 2) 00:08:05.436 3503.655 - 3528.862: 0.3324% ( 2) 00:08:05.436 3528.862 - 3554.068: 0.3380% ( 1) 00:08:05.436 3554.068 - 3579.274: 0.3546% ( 3) 00:08:05.436 4915.200 - 4940.406: 0.3602% ( 1) 00:08:05.436 4940.406 - 4965.612: 0.4045% ( 8) 00:08:05.436 4965.612 - 4990.818: 0.4377% ( 6) 00:08:05.436 4990.818 - 5016.025: 0.4765% ( 7) 00:08:05.436 5016.025 - 5041.231: 0.5208% ( 8) 00:08:05.436 5041.231 - 5066.437: 0.5762% ( 10) 00:08:05.436 5066.437 - 5091.643: 0.6261% ( 9) 00:08:05.436 5091.643 - 5116.849: 0.6372% ( 2) 00:08:05.436 5116.849 - 5142.055: 0.6538% ( 3) 00:08:05.436 5142.055 - 5167.262: 0.6649% ( 2) 00:08:05.436 5167.262 - 5192.468: 0.6760% ( 2) 00:08:05.436 5192.468 - 5217.674: 0.6926% ( 3) 00:08:05.436 5217.674 - 5242.880: 0.7037% ( 2) 00:08:05.436 5242.880 - 5268.086: 0.7092% ( 1) 00:08:05.436 5898.240 - 5923.446: 0.7148% ( 1) 00:08:05.436 5923.446 - 5948.652: 0.7203% ( 1) 00:08:05.436 5999.065 - 6024.271: 0.7369% ( 3) 00:08:05.436 6024.271 - 6049.477: 0.7757% ( 7) 00:08:05.436 6049.477 - 6074.683: 0.8034% ( 5) 00:08:05.436 6074.683 - 6099.889: 0.8477% ( 8) 00:08:05.436 6099.889 - 6125.095: 0.8976% ( 9) 00:08:05.436 6125.095 - 6150.302: 1.0306% ( 24) 00:08:05.436 6150.302 - 6175.508: 1.1636% ( 24) 00:08:05.436 6175.508 - 6200.714: 1.3520% ( 34) 00:08:05.436 6200.714 - 6225.920: 1.5348% ( 33) 00:08:05.436 6225.920 - 6251.126: 1.7841% ( 45) 00:08:05.436 6251.126 - 6276.332: 2.1941% ( 74) 00:08:05.436 6276.332 - 6301.538: 2.6374% ( 80) 00:08:05.436 6301.538 - 6326.745: 3.1693% ( 96) 00:08:05.436 6326.745 - 6351.951: 3.7677% ( 108) 00:08:05.436 6351.951 - 6377.157: 4.5434% ( 140) 00:08:05.436 6377.157 - 6402.363: 5.7569% ( 219) 00:08:05.436 6402.363 - 6427.569: 6.7154% ( 173) 00:08:05.436 6427.569 - 6452.775: 7.9012% ( 214) 00:08:05.436 6452.775 - 6503.188: 11.5248% ( 654) 00:08:05.436 6503.188 - 6553.600: 16.1514% ( 835) 00:08:05.436 6553.600 - 6604.012: 22.5177% ( 1149) 00:08:05.436 6604.012 - 6654.425: 29.2609% ( 1217) 00:08:05.436 6654.425 - 6704.837: 36.5248% ( 1311) 00:08:05.436 6704.837 - 6755.249: 43.1793% ( 1201) 00:08:05.436 6755.249 - 6805.662: 48.9528% ( 1042) 00:08:05.436 6805.662 - 6856.074: 54.3218% ( 969) 00:08:05.436 6856.074 - 6906.486: 59.6576% ( 963) 00:08:05.436 6906.486 - 6956.898: 63.6414% ( 719) 00:08:05.436 6956.898 - 7007.311: 67.3482% ( 669) 00:08:05.436 7007.311 - 7057.723: 70.5230% ( 573) 00:08:05.436 7057.723 - 7108.135: 73.0829% ( 462) 00:08:05.436 7108.135 - 7158.548: 75.3435% ( 408) 00:08:05.436 7158.548 - 7208.960: 
77.3382% ( 360) 00:08:05.436 7208.960 - 7259.372: 79.2332% ( 342) 00:08:05.436 7259.372 - 7309.785: 80.2637% ( 186) 00:08:05.436 7309.785 - 7360.197: 81.2057% ( 170) 00:08:05.436 7360.197 - 7410.609: 82.0922% ( 160) 00:08:05.436 7410.609 - 7461.022: 82.9676% ( 158) 00:08:05.436 7461.022 - 7511.434: 84.0536% ( 196) 00:08:05.436 7511.434 - 7561.846: 84.6465% ( 107) 00:08:05.436 7561.846 - 7612.258: 85.7491% ( 199) 00:08:05.436 7612.258 - 7662.671: 86.9016% ( 208) 00:08:05.436 7662.671 - 7713.083: 87.6607% ( 137) 00:08:05.436 7713.083 - 7763.495: 88.7411% ( 195) 00:08:05.436 7763.495 - 7813.908: 89.2232% ( 87) 00:08:05.436 7813.908 - 7864.320: 89.8050% ( 105) 00:08:05.436 7864.320 - 7914.732: 90.3313% ( 95) 00:08:05.436 7914.732 - 7965.145: 91.0627% ( 132) 00:08:05.436 7965.145 - 8015.557: 91.6556% ( 107) 00:08:05.436 8015.557 - 8065.969: 92.1210% ( 84) 00:08:05.436 8065.969 - 8116.382: 92.5975% ( 86) 00:08:05.436 8116.382 - 8166.794: 93.1627% ( 102) 00:08:05.436 8166.794 - 8217.206: 93.7611% ( 108) 00:08:05.436 8217.206 - 8267.618: 94.1046% ( 62) 00:08:05.436 8267.618 - 8318.031: 94.4149% ( 56) 00:08:05.436 8318.031 - 8368.443: 94.8083% ( 71) 00:08:05.436 8368.443 - 8418.855: 95.0853% ( 50) 00:08:05.436 8418.855 - 8469.268: 95.3457% ( 47) 00:08:05.436 8469.268 - 8519.680: 95.7225% ( 68) 00:08:05.436 8519.680 - 8570.092: 96.0494% ( 59) 00:08:05.436 8570.092 - 8620.505: 96.2267% ( 32) 00:08:05.436 8620.505 - 8670.917: 96.3874% ( 29) 00:08:05.436 8670.917 - 8721.329: 96.5426% ( 28) 00:08:05.436 8721.329 - 8771.742: 96.6922% ( 27) 00:08:05.436 8771.742 - 8822.154: 96.7974% ( 19) 00:08:05.436 8822.154 - 8872.566: 96.8805% ( 15) 00:08:05.436 8872.566 - 8922.978: 96.9858% ( 19) 00:08:05.436 8922.978 - 8973.391: 97.0800% ( 17) 00:08:05.436 8973.391 - 9023.803: 97.1853% ( 19) 00:08:05.436 9023.803 - 9074.215: 97.2684% ( 15) 00:08:05.436 9074.215 - 9124.628: 97.3570% ( 16) 00:08:05.436 9124.628 - 9175.040: 97.4014% ( 8) 00:08:05.436 9175.040 - 9225.452: 97.5233% ( 22) 00:08:05.436 9225.452 - 9275.865: 97.5842% ( 11) 00:08:05.436 9275.865 - 9326.277: 97.6230% ( 7) 00:08:05.436 9326.277 - 9376.689: 97.7227% ( 18) 00:08:05.436 9376.689 - 9427.102: 97.7948% ( 13) 00:08:05.436 9427.102 - 9477.514: 97.8668% ( 13) 00:08:05.436 9477.514 - 9527.926: 97.9222% ( 10) 00:08:05.436 9527.926 - 9578.338: 97.9721% ( 9) 00:08:05.436 9578.338 - 9628.751: 98.0109% ( 7) 00:08:05.436 9628.751 - 9679.163: 98.0552% ( 8) 00:08:05.436 9679.163 - 9729.575: 98.1217% ( 12) 00:08:05.436 9729.575 - 9779.988: 98.1826% ( 11) 00:08:05.437 9779.988 - 9830.400: 98.3932% ( 38) 00:08:05.437 9830.400 - 9880.812: 98.4375% ( 8) 00:08:05.437 9880.812 - 9931.225: 98.4818% ( 8) 00:08:05.437 9931.225 - 9981.637: 98.5206% ( 7) 00:08:05.437 9981.637 - 10032.049: 98.5594% ( 7) 00:08:05.437 10032.049 - 10082.462: 98.5816% ( 4) 00:08:05.437 11040.295 - 11090.708: 98.5926% ( 2) 00:08:05.437 11090.708 - 11141.120: 98.6037% ( 2) 00:08:05.437 11141.120 - 11191.532: 98.6259% ( 4) 00:08:05.437 11191.532 - 11241.945: 98.6425% ( 3) 00:08:05.437 11241.945 - 11292.357: 98.6591% ( 3) 00:08:05.437 11292.357 - 11342.769: 98.6647% ( 1) 00:08:05.437 11342.769 - 11393.182: 98.6758% ( 2) 00:08:05.437 11393.182 - 11443.594: 98.6868% ( 2) 00:08:05.437 11443.594 - 11494.006: 98.6924% ( 1) 00:08:05.437 11494.006 - 11544.418: 98.6979% ( 1) 00:08:05.437 11544.418 - 11594.831: 98.7145% ( 3) 00:08:05.437 11594.831 - 11645.243: 98.7256% ( 2) 00:08:05.437 11645.243 - 11695.655: 98.7312% ( 1) 00:08:05.437 11695.655 - 11746.068: 98.7478% ( 3) 00:08:05.437 11746.068 - 11796.480: 
98.7755% ( 5) 00:08:05.437 11796.480 - 11846.892: 98.7977% ( 4) 00:08:05.437 11846.892 - 11897.305: 98.8309% ( 6) 00:08:05.437 11897.305 - 11947.717: 98.8697% ( 7) 00:08:05.437 11947.717 - 11998.129: 98.9140% ( 8) 00:08:05.437 11998.129 - 12048.542: 98.9805% ( 12) 00:08:05.437 12048.542 - 12098.954: 98.9971% ( 3) 00:08:05.437 12098.954 - 12149.366: 99.0193% ( 4) 00:08:05.437 12149.366 - 12199.778: 99.0470% ( 5) 00:08:05.437 12199.778 - 12250.191: 99.0913% ( 8) 00:08:05.437 12250.191 - 12300.603: 99.1412% ( 9) 00:08:05.437 12300.603 - 12351.015: 99.1744% ( 6) 00:08:05.437 12351.015 - 12401.428: 99.1855% ( 2) 00:08:05.437 12401.428 - 12451.840: 99.1966% ( 2) 00:08:05.437 12451.840 - 12502.252: 99.2077% ( 2) 00:08:05.437 12502.252 - 12552.665: 99.2188% ( 2) 00:08:05.437 12552.665 - 12603.077: 99.2298% ( 2) 00:08:05.437 12603.077 - 12653.489: 99.2409% ( 2) 00:08:05.437 12653.489 - 12703.902: 99.2520% ( 2) 00:08:05.437 12703.902 - 12754.314: 99.2575% ( 1) 00:08:05.437 12754.314 - 12804.726: 99.2686% ( 2) 00:08:05.437 12804.726 - 12855.138: 99.2797% ( 2) 00:08:05.437 12855.138 - 12905.551: 99.2852% ( 1) 00:08:05.437 12905.551 - 13006.375: 99.2908% ( 1) 00:08:05.437 13812.972 - 13913.797: 99.2963% ( 1) 00:08:05.437 14216.271 - 14317.095: 99.3185% ( 4) 00:08:05.437 14317.095 - 14417.920: 99.3573% ( 7) 00:08:05.437 14417.920 - 14518.745: 99.4016% ( 8) 00:08:05.437 14518.745 - 14619.569: 99.4459% ( 8) 00:08:05.437 14619.569 - 14720.394: 99.4847% ( 7) 00:08:05.437 14720.394 - 14821.218: 99.5124% ( 5) 00:08:05.437 14821.218 - 14922.043: 99.5457% ( 6) 00:08:05.437 14922.043 - 15022.868: 99.5789% ( 6) 00:08:05.437 15022.868 - 15123.692: 99.6066% ( 5) 00:08:05.437 15123.692 - 15224.517: 99.6398% ( 6) 00:08:05.437 15224.517 - 15325.342: 99.6454% ( 1) 00:08:05.437 17644.308 - 17745.132: 99.6731% ( 5) 00:08:05.437 17745.132 - 17845.957: 99.7008% ( 5) 00:08:05.437 17845.957 - 17946.782: 99.7285% ( 5) 00:08:05.437 17946.782 - 18047.606: 99.9003% ( 31) 00:08:05.437 18047.606 - 18148.431: 99.9280% ( 5) 00:08:05.437 18148.431 - 18249.255: 99.9612% ( 6) 00:08:05.437 18249.255 - 18350.080: 99.9889% ( 5) 00:08:05.437 18450.905 - 18551.729: 100.0000% ( 2) 00:08:05.437 00:08:05.437 13:56:43 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:05.437 00:08:05.437 real 0m2.440s 00:08:05.437 user 0m2.144s 00:08:05.437 sys 0m0.187s 00:08:05.437 13:56:43 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.437 13:56:43 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 END TEST nvme_perf 00:08:05.437 ************************************ 00:08:05.437 13:56:43 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:05.437 13:56:43 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:05.437 13:56:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.437 13:56:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 ************************************ 00:08:05.437 START TEST nvme_hello_world 00:08:05.437 ************************************ 00:08:05.437 13:56:43 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:05.696 Initializing NVMe Controllers 00:08:05.696 Attached to 0000:00:10.0 00:08:05.696 Namespace ID: 1 size: 6GB 00:08:05.696 Attached to 0000:00:11.0 00:08:05.696 Namespace ID: 1 size: 5GB 00:08:05.696 Attached to 0000:00:13.0 
00:08:05.696 Namespace ID: 1 size: 1GB 00:08:05.696 Attached to 0000:00:12.0 00:08:05.696 Namespace ID: 1 size: 4GB 00:08:05.696 Namespace ID: 2 size: 4GB 00:08:05.696 Namespace ID: 3 size: 4GB 00:08:05.696 Initialization complete. 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 INFO: using host memory buffer for IO 00:08:05.696 Hello world! 00:08:05.696 00:08:05.696 real 0m0.187s 00:08:05.696 user 0m0.053s 00:08:05.696 sys 0m0.089s 00:08:05.696 13:56:43 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.696 ************************************ 00:08:05.696 END TEST nvme_hello_world 00:08:05.696 ************************************ 00:08:05.696 13:56:43 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:05.696 13:56:43 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:05.696 13:56:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.696 13:56:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.696 13:56:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.696 ************************************ 00:08:05.696 START TEST nvme_sgl 00:08:05.696 ************************************ 00:08:05.696 13:56:43 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:05.954 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:05.954 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:05.954 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:05.954 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:05.954 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:05.954 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:05.954 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:05.954 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:05.954 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:06.212 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:06.213 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:06.213 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:06.213 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_0 Invalid IO 
length parameter 00:08:06.213 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:06.213 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:06.213 NVMe Readv/Writev Request test 00:08:06.213 Attached to 0000:00:10.0 00:08:06.213 Attached to 0000:00:11.0 00:08:06.213 Attached to 0000:00:13.0 00:08:06.213 Attached to 0000:00:12.0 00:08:06.213 0000:00:10.0: build_io_request_2 test passed 00:08:06.213 0000:00:10.0: build_io_request_4 test passed 00:08:06.213 0000:00:10.0: build_io_request_5 test passed 00:08:06.213 0000:00:10.0: build_io_request_6 test passed 00:08:06.213 0000:00:10.0: build_io_request_7 test passed 00:08:06.213 0000:00:10.0: build_io_request_10 test passed 00:08:06.213 0000:00:11.0: build_io_request_2 test passed 00:08:06.213 0000:00:11.0: build_io_request_4 test passed 00:08:06.213 0000:00:11.0: build_io_request_5 test passed 00:08:06.213 0000:00:11.0: build_io_request_6 test passed 00:08:06.213 0000:00:11.0: build_io_request_7 test passed 00:08:06.213 0000:00:11.0: build_io_request_10 test passed 00:08:06.213 Cleaning up... 00:08:06.213 00:08:06.213 real 0m0.378s 00:08:06.213 user 0m0.258s 00:08:06.213 sys 0m0.076s 00:08:06.213 13:56:44 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.213 ************************************ 00:08:06.213 END TEST nvme_sgl 00:08:06.213 ************************************ 00:08:06.213 13:56:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:06.213 13:56:44 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:06.213 13:56:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.213 13:56:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.213 13:56:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.213 ************************************ 00:08:06.213 START TEST nvme_e2edp 00:08:06.213 ************************************ 00:08:06.213 13:56:44 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:06.213 NVMe Write/Read with End-to-End data protection test 00:08:06.213 Attached to 0000:00:10.0 00:08:06.213 Attached to 0000:00:11.0 00:08:06.213 Attached to 0000:00:13.0 00:08:06.213 Attached to 0000:00:12.0 00:08:06.213 Cleaning up... 
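The START/END banners and the real/user/sys lines around each test above come from the suite's run_test wrapper. A minimal sketch of what that wrapper appears to do, reconstructed only from the output visible in this log (the banner width and any exit-status bookkeeping in the real common/autotest_common.sh are assumptions):

  run_test() {
    # Print a START banner, run and time the wrapped test binary, print an END banner.
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
  }

  # Invocation mirroring the log entries above:
  run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl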
00:08:06.472 00:08:06.472 real 0m0.183s 00:08:06.472 user 0m0.053s 00:08:06.472 sys 0m0.087s 00:08:06.472 ************************************ 00:08:06.472 END TEST nvme_e2edp 00:08:06.472 ************************************ 00:08:06.472 13:56:44 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.472 13:56:44 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 13:56:44 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 ************************************ 00:08:06.472 START TEST nvme_reserve 00:08:06.472 ************************************ 00:08:06.472 13:56:44 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:06.472 ===================================================== 00:08:06.472 NVMe Controller at PCI bus 0, device 16, function 0 00:08:06.472 ===================================================== 00:08:06.472 Reservations: Not Supported 00:08:06.472 ===================================================== 00:08:06.472 NVMe Controller at PCI bus 0, device 17, function 0 00:08:06.472 ===================================================== 00:08:06.472 Reservations: Not Supported 00:08:06.472 ===================================================== 00:08:06.472 NVMe Controller at PCI bus 0, device 19, function 0 00:08:06.472 ===================================================== 00:08:06.472 Reservations: Not Supported 00:08:06.472 ===================================================== 00:08:06.472 NVMe Controller at PCI bus 0, device 18, function 0 00:08:06.472 ===================================================== 00:08:06.472 Reservations: Not Supported 00:08:06.472 Reservation test passed 00:08:06.472 00:08:06.472 real 0m0.158s 00:08:06.472 user 0m0.047s 00:08:06.472 sys 0m0.074s 00:08:06.472 13:56:44 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.472 ************************************ 00:08:06.472 13:56:44 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 END TEST nvme_reserve 00:08:06.472 ************************************ 00:08:06.472 13:56:44 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.472 13:56:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.472 ************************************ 00:08:06.472 START TEST nvme_err_injection 00:08:06.472 ************************************ 00:08:06.472 13:56:44 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:06.731 NVMe Error Injection test 00:08:06.731 Attached to 0000:00:10.0 00:08:06.731 Attached to 0000:00:11.0 00:08:06.731 Attached to 0000:00:13.0 00:08:06.731 Attached to 0000:00:12.0 00:08:06.731 0000:00:10.0: get features failed as expected 00:08:06.731 0000:00:11.0: get features failed as expected 00:08:06.731 0000:00:13.0: get features failed as expected 00:08:06.731 0000:00:12.0: get features failed as expected 00:08:06.731 
0000:00:10.0: get features successfully as expected 00:08:06.731 0000:00:11.0: get features successfully as expected 00:08:06.731 0000:00:13.0: get features successfully as expected 00:08:06.731 0000:00:12.0: get features successfully as expected 00:08:06.731 0000:00:11.0: read failed as expected 00:08:06.731 0000:00:13.0: read failed as expected 00:08:06.731 0000:00:12.0: read failed as expected 00:08:06.731 0000:00:10.0: read failed as expected 00:08:06.731 0000:00:10.0: read successfully as expected 00:08:06.731 0000:00:11.0: read successfully as expected 00:08:06.731 0000:00:13.0: read successfully as expected 00:08:06.731 0000:00:12.0: read successfully as expected 00:08:06.731 Cleaning up... 00:08:06.731 00:08:06.731 real 0m0.179s 00:08:06.731 user 0m0.062s 00:08:06.731 sys 0m0.077s 00:08:06.731 13:56:44 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.731 ************************************ 00:08:06.731 END TEST nvme_err_injection 00:08:06.731 13:56:44 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:06.731 ************************************ 00:08:06.731 13:56:44 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:06.731 13:56:44 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:06.731 13:56:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.731 13:56:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.731 ************************************ 00:08:06.731 START TEST nvme_overhead 00:08:06.731 ************************************ 00:08:06.731 13:56:44 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:08.105 Initializing NVMe Controllers 00:08:08.105 Attached to 0000:00:10.0 00:08:08.105 Attached to 0000:00:11.0 00:08:08.105 Attached to 0000:00:13.0 00:08:08.105 Attached to 0000:00:12.0 00:08:08.105 Initialization complete. Launching workers. 
00:08:08.105 submit (in ns) avg, min, max = 11162.6, 9756.9, 180006.9 00:08:08.105 complete (in ns) avg, min, max = 7493.6, 7153.8, 66441.5 00:08:08.105 00:08:08.105 Submit histogram 00:08:08.105 ================ 00:08:08.105 Range in us Cumulative Count 00:08:08.105 9.748 - 9.797: 0.0054% ( 1) 00:08:08.105 9.994 - 10.043: 0.0108% ( 1) 00:08:08.105 10.142 - 10.191: 0.0163% ( 1) 00:08:08.105 10.388 - 10.437: 0.0217% ( 1) 00:08:08.105 10.683 - 10.732: 0.0922% ( 13) 00:08:08.105 10.732 - 10.782: 1.2585% ( 215) 00:08:08.105 10.782 - 10.831: 6.4500% ( 957) 00:08:08.105 10.831 - 10.880: 19.4857% ( 2403) 00:08:08.105 10.880 - 10.929: 38.7762% ( 3556) 00:08:08.105 10.929 - 10.978: 58.8369% ( 3698) 00:08:08.105 10.978 - 11.028: 74.2432% ( 2840) 00:08:08.105 11.028 - 11.077: 82.9608% ( 1607) 00:08:08.105 11.077 - 11.126: 87.1704% ( 776) 00:08:08.105 11.126 - 11.175: 89.4868% ( 427) 00:08:08.105 11.175 - 11.225: 90.7996% ( 242) 00:08:08.105 11.225 - 11.274: 91.7381% ( 173) 00:08:08.105 11.274 - 11.323: 92.2534% ( 95) 00:08:08.105 11.323 - 11.372: 92.6549% ( 74) 00:08:08.105 11.372 - 11.422: 92.9912% ( 62) 00:08:08.105 11.422 - 11.471: 93.3058% ( 58) 00:08:08.105 11.471 - 11.520: 93.6259% ( 59) 00:08:08.105 11.520 - 11.569: 93.9134% ( 53) 00:08:08.105 11.569 - 11.618: 94.1901% ( 51) 00:08:08.105 11.618 - 11.668: 94.4450% ( 47) 00:08:08.105 11.668 - 11.717: 94.6132% ( 31) 00:08:08.105 11.717 - 11.766: 94.8085% ( 36) 00:08:08.105 11.766 - 11.815: 94.9604% ( 28) 00:08:08.105 11.815 - 11.865: 95.1557% ( 36) 00:08:08.105 11.865 - 11.914: 95.2805% ( 23) 00:08:08.105 11.914 - 11.963: 95.5137% ( 43) 00:08:08.105 11.963 - 12.012: 95.6493% ( 25) 00:08:08.105 12.012 - 12.062: 95.8121% ( 30) 00:08:08.105 12.062 - 12.111: 95.9803% ( 31) 00:08:08.105 12.111 - 12.160: 96.1484% ( 31) 00:08:08.105 12.160 - 12.209: 96.3112% ( 30) 00:08:08.105 12.209 - 12.258: 96.3654% ( 10) 00:08:08.105 12.258 - 12.308: 96.4251% ( 11) 00:08:08.105 12.308 - 12.357: 96.4793% ( 10) 00:08:08.105 12.357 - 12.406: 96.4956% ( 3) 00:08:08.105 12.406 - 12.455: 96.5390% ( 8) 00:08:08.105 12.455 - 12.505: 96.5553% ( 3) 00:08:08.105 12.505 - 12.554: 96.5987% ( 8) 00:08:08.105 12.554 - 12.603: 96.6366% ( 7) 00:08:08.105 12.603 - 12.702: 96.6909% ( 10) 00:08:08.105 12.702 - 12.800: 96.8102% ( 22) 00:08:08.105 12.800 - 12.898: 96.9133% ( 19) 00:08:08.105 12.898 - 12.997: 97.0055% ( 17) 00:08:08.105 12.997 - 13.095: 97.1357% ( 24) 00:08:08.105 13.095 - 13.194: 97.3039% ( 31) 00:08:08.105 13.194 - 13.292: 97.4992% ( 36) 00:08:08.105 13.292 - 13.391: 97.6511% ( 28) 00:08:08.105 13.391 - 13.489: 97.7433% ( 17) 00:08:08.105 13.489 - 13.588: 97.8138% ( 13) 00:08:08.105 13.588 - 13.686: 97.8626% ( 9) 00:08:08.105 13.686 - 13.785: 97.9060% ( 8) 00:08:08.105 13.785 - 13.883: 97.9332% ( 5) 00:08:08.105 13.883 - 13.982: 97.9711% ( 7) 00:08:08.105 13.982 - 14.080: 98.0145% ( 8) 00:08:08.105 14.080 - 14.178: 98.0362% ( 4) 00:08:08.105 14.178 - 14.277: 98.0688% ( 6) 00:08:08.105 14.277 - 14.375: 98.1122% ( 8) 00:08:08.105 14.375 - 14.474: 98.1447% ( 6) 00:08:08.105 14.474 - 14.572: 98.2044% ( 11) 00:08:08.105 14.572 - 14.671: 98.2424% ( 7) 00:08:08.106 14.671 - 14.769: 98.3021% ( 11) 00:08:08.106 14.769 - 14.868: 98.3237% ( 4) 00:08:08.106 14.868 - 14.966: 98.3726% ( 9) 00:08:08.106 14.966 - 15.065: 98.4322% ( 11) 00:08:08.106 15.065 - 15.163: 98.4594% ( 5) 00:08:08.106 15.163 - 15.262: 98.4811% ( 4) 00:08:08.106 15.262 - 15.360: 98.4973% ( 3) 00:08:08.106 15.360 - 15.458: 98.5136% ( 3) 00:08:08.106 15.458 - 15.557: 98.5353% ( 4) 00:08:08.106 15.557 - 15.655: 
98.5516% ( 3) 00:08:08.106 15.655 - 15.754: 98.5624% ( 2) 00:08:08.106 15.754 - 15.852: 98.5733% ( 2) 00:08:08.106 15.852 - 15.951: 98.5787% ( 1) 00:08:08.106 15.951 - 16.049: 98.5841% ( 1) 00:08:08.106 16.049 - 16.148: 98.6004% ( 3) 00:08:08.106 16.148 - 16.246: 98.6167% ( 3) 00:08:08.106 16.246 - 16.345: 98.6438% ( 5) 00:08:08.106 16.345 - 16.443: 98.7035% ( 11) 00:08:08.106 16.443 - 16.542: 98.7740% ( 13) 00:08:08.106 16.542 - 16.640: 98.8825% ( 20) 00:08:08.106 16.640 - 16.738: 98.9747% ( 17) 00:08:08.106 16.738 - 16.837: 99.0452% ( 13) 00:08:08.106 16.837 - 16.935: 99.1375% ( 17) 00:08:08.106 16.935 - 17.034: 99.1971% ( 11) 00:08:08.106 17.034 - 17.132: 99.2568% ( 11) 00:08:08.106 17.132 - 17.231: 99.3273% ( 13) 00:08:08.106 17.231 - 17.329: 99.4033% ( 14) 00:08:08.106 17.329 - 17.428: 99.4412% ( 7) 00:08:08.106 17.428 - 17.526: 99.4792% ( 7) 00:08:08.106 17.526 - 17.625: 99.5172% ( 7) 00:08:08.106 17.625 - 17.723: 99.5443% ( 5) 00:08:08.106 17.723 - 17.822: 99.5931% ( 9) 00:08:08.106 17.822 - 17.920: 99.6203% ( 5) 00:08:08.106 17.920 - 18.018: 99.6257% ( 1) 00:08:08.106 18.018 - 18.117: 99.6474% ( 4) 00:08:08.106 18.117 - 18.215: 99.6691% ( 4) 00:08:08.106 18.215 - 18.314: 99.6745% ( 1) 00:08:08.106 18.314 - 18.412: 99.6962% ( 4) 00:08:08.106 18.511 - 18.609: 99.7016% ( 1) 00:08:08.106 18.609 - 18.708: 99.7071% ( 1) 00:08:08.106 18.708 - 18.806: 99.7125% ( 1) 00:08:08.106 18.806 - 18.905: 99.7179% ( 1) 00:08:08.106 18.905 - 19.003: 99.7233% ( 1) 00:08:08.106 19.003 - 19.102: 99.7342% ( 2) 00:08:08.106 19.102 - 19.200: 99.7396% ( 1) 00:08:08.106 19.200 - 19.298: 99.7450% ( 1) 00:08:08.106 19.298 - 19.397: 99.7505% ( 1) 00:08:08.106 19.594 - 19.692: 99.7559% ( 1) 00:08:08.106 19.692 - 19.791: 99.7613% ( 1) 00:08:08.106 19.791 - 19.889: 99.7722% ( 2) 00:08:08.106 19.889 - 19.988: 99.7776% ( 1) 00:08:08.106 19.988 - 20.086: 99.7939% ( 3) 00:08:08.106 20.086 - 20.185: 99.7993% ( 1) 00:08:08.106 20.185 - 20.283: 99.8047% ( 1) 00:08:08.106 20.283 - 20.382: 99.8101% ( 1) 00:08:08.106 20.382 - 20.480: 99.8156% ( 1) 00:08:08.106 20.677 - 20.775: 99.8264% ( 2) 00:08:08.106 20.775 - 20.874: 99.8318% ( 1) 00:08:08.106 20.874 - 20.972: 99.8373% ( 1) 00:08:08.106 20.972 - 21.071: 99.8427% ( 1) 00:08:08.106 21.169 - 21.268: 99.8481% ( 1) 00:08:08.106 21.268 - 21.366: 99.8535% ( 1) 00:08:08.106 21.366 - 21.465: 99.8590% ( 1) 00:08:08.106 21.662 - 21.760: 99.8644% ( 1) 00:08:08.106 21.858 - 21.957: 99.8698% ( 1) 00:08:08.106 21.957 - 22.055: 99.8752% ( 1) 00:08:08.106 22.449 - 22.548: 99.8807% ( 1) 00:08:08.106 22.548 - 22.646: 99.8861% ( 1) 00:08:08.106 22.646 - 22.745: 99.8915% ( 1) 00:08:08.106 22.843 - 22.942: 99.8969% ( 1) 00:08:08.106 23.532 - 23.631: 99.9024% ( 1) 00:08:08.106 23.729 - 23.828: 99.9078% ( 1) 00:08:08.106 24.025 - 24.123: 99.9132% ( 1) 00:08:08.106 24.123 - 24.222: 99.9186% ( 1) 00:08:08.106 24.222 - 24.320: 99.9241% ( 1) 00:08:08.106 25.403 - 25.600: 99.9295% ( 1) 00:08:08.106 25.600 - 25.797: 99.9349% ( 1) 00:08:08.106 26.585 - 26.782: 99.9403% ( 1) 00:08:08.106 27.569 - 27.766: 99.9458% ( 1) 00:08:08.106 30.129 - 30.326: 99.9512% ( 1) 00:08:08.106 37.612 - 37.809: 99.9566% ( 1) 00:08:08.106 40.369 - 40.566: 99.9620% ( 1) 00:08:08.106 44.308 - 44.505: 99.9675% ( 1) 00:08:08.106 45.095 - 45.292: 99.9729% ( 1) 00:08:08.106 50.018 - 50.215: 99.9783% ( 1) 00:08:08.106 51.200 - 51.594: 99.9892% ( 2) 00:08:08.106 54.351 - 54.745: 99.9946% ( 1) 00:08:08.106 179.594 - 180.382: 100.0000% ( 1) 00:08:08.106 00:08:08.106 Complete histogram 00:08:08.106 ================== 00:08:08.106 Range 
in us Cumulative Count 00:08:08.106 7.138 - 7.188: 0.0759% ( 14) 00:08:08.106 7.188 - 7.237: 2.1591% ( 384) 00:08:08.106 7.237 - 7.286: 13.6595% ( 2120) 00:08:08.106 7.286 - 7.335: 38.0493% ( 4496) 00:08:08.106 7.335 - 7.385: 63.6704% ( 4723) 00:08:08.106 7.385 - 7.434: 80.8940% ( 3175) 00:08:08.106 7.434 - 7.483: 89.0474% ( 1503) 00:08:08.106 7.483 - 7.532: 92.6223% ( 659) 00:08:08.106 7.532 - 7.582: 94.4179% ( 331) 00:08:08.106 7.582 - 7.631: 95.5463% ( 208) 00:08:08.106 7.631 - 7.680: 96.0291% ( 89) 00:08:08.106 7.680 - 7.729: 96.3437% ( 58) 00:08:08.106 7.729 - 7.778: 96.4848% ( 26) 00:08:08.106 7.778 - 7.828: 96.6041% ( 22) 00:08:08.106 7.828 - 7.877: 96.6800% ( 14) 00:08:08.106 7.877 - 7.926: 96.7451% ( 12) 00:08:08.106 7.926 - 7.975: 96.8048% ( 11) 00:08:08.106 7.975 - 8.025: 96.8970% ( 17) 00:08:08.106 8.025 - 8.074: 97.0327% ( 25) 00:08:08.106 8.074 - 8.123: 97.2985% ( 49) 00:08:08.106 8.123 - 8.172: 97.6240% ( 60) 00:08:08.106 8.172 - 8.222: 97.8464% ( 41) 00:08:08.106 8.222 - 8.271: 98.0200% ( 32) 00:08:08.106 8.271 - 8.320: 98.0959% ( 14) 00:08:08.106 8.320 - 8.369: 98.1393% ( 8) 00:08:08.106 8.369 - 8.418: 98.1610% ( 4) 00:08:08.106 8.418 - 8.468: 98.1881% ( 5) 00:08:08.106 8.468 - 8.517: 98.1990% ( 2) 00:08:08.106 8.517 - 8.566: 98.2044% ( 1) 00:08:08.106 8.615 - 8.665: 98.2098% ( 1) 00:08:08.106 8.714 - 8.763: 98.2315% ( 4) 00:08:08.106 8.763 - 8.812: 98.2370% ( 1) 00:08:08.106 8.812 - 8.862: 98.2532% ( 3) 00:08:08.106 8.862 - 8.911: 98.2695% ( 3) 00:08:08.106 9.058 - 9.108: 98.2749% ( 1) 00:08:08.106 9.157 - 9.206: 98.2804% ( 1) 00:08:08.106 9.255 - 9.305: 98.2912% ( 2) 00:08:08.106 9.305 - 9.354: 98.3021% ( 2) 00:08:08.106 9.452 - 9.502: 98.3075% ( 1) 00:08:08.106 9.502 - 9.551: 98.3129% ( 1) 00:08:08.106 9.551 - 9.600: 98.3346% ( 4) 00:08:08.106 9.600 - 9.649: 98.3454% ( 2) 00:08:08.106 9.649 - 9.698: 98.3509% ( 1) 00:08:08.106 9.698 - 9.748: 98.3617% ( 2) 00:08:08.106 9.748 - 9.797: 98.3888% ( 5) 00:08:08.106 9.797 - 9.846: 98.3997% ( 2) 00:08:08.106 9.846 - 9.895: 98.4105% ( 2) 00:08:08.106 9.895 - 9.945: 98.4322% ( 4) 00:08:08.106 9.945 - 9.994: 98.4594% ( 5) 00:08:08.106 9.994 - 10.043: 98.4811% ( 4) 00:08:08.106 10.043 - 10.092: 98.4865% ( 1) 00:08:08.106 10.092 - 10.142: 98.5082% ( 4) 00:08:08.106 10.142 - 10.191: 98.5190% ( 2) 00:08:08.106 10.191 - 10.240: 98.5353% ( 3) 00:08:08.106 10.240 - 10.289: 98.5462% ( 2) 00:08:08.106 10.289 - 10.338: 98.5570% ( 2) 00:08:08.106 10.338 - 10.388: 98.5624% ( 1) 00:08:08.106 10.388 - 10.437: 98.5679% ( 1) 00:08:08.106 10.437 - 10.486: 98.5733% ( 1) 00:08:08.106 10.535 - 10.585: 98.5787% ( 1) 00:08:08.106 10.585 - 10.634: 98.5841% ( 1) 00:08:08.106 10.634 - 10.683: 98.5896% ( 1) 00:08:08.106 10.683 - 10.732: 98.5950% ( 1) 00:08:08.106 10.732 - 10.782: 98.6004% ( 1) 00:08:08.106 10.782 - 10.831: 98.6058% ( 1) 00:08:08.106 10.831 - 10.880: 98.6167% ( 2) 00:08:08.106 10.929 - 10.978: 98.6221% ( 1) 00:08:08.106 11.323 - 11.372: 98.6275% ( 1) 00:08:08.106 11.471 - 11.520: 98.6330% ( 1) 00:08:08.106 11.766 - 11.815: 98.6384% ( 1) 00:08:08.106 12.012 - 12.062: 98.6492% ( 2) 00:08:08.106 12.357 - 12.406: 98.6547% ( 1) 00:08:08.106 12.455 - 12.505: 98.6655% ( 2) 00:08:08.106 12.554 - 12.603: 98.6709% ( 1) 00:08:08.106 12.603 - 12.702: 98.6764% ( 1) 00:08:08.106 12.702 - 12.800: 98.6981% ( 4) 00:08:08.106 12.800 - 12.898: 98.7415% ( 8) 00:08:08.106 12.898 - 12.997: 98.7794% ( 7) 00:08:08.106 12.997 - 13.095: 98.8445% ( 12) 00:08:08.106 13.095 - 13.194: 98.8933% ( 9) 00:08:08.106 13.194 - 13.292: 98.9476% ( 10) 00:08:08.106 13.292 - 
13.391: 99.0290% ( 15) 00:08:08.106 13.391 - 13.489: 99.1049% ( 14) 00:08:08.106 13.489 - 13.588: 99.2080% ( 19) 00:08:08.106 13.588 - 13.686: 99.2785% ( 13) 00:08:08.106 13.686 - 13.785: 99.3490% ( 13) 00:08:08.106 13.785 - 13.883: 99.4141% ( 12) 00:08:08.106 13.883 - 13.982: 99.4684% ( 10) 00:08:08.106 13.982 - 14.080: 99.5389% ( 13) 00:08:08.106 14.080 - 14.178: 99.5823% ( 8) 00:08:08.107 14.178 - 14.277: 99.6148% ( 6) 00:08:08.107 14.277 - 14.375: 99.6420% ( 5) 00:08:08.107 14.375 - 14.474: 99.6745% ( 6) 00:08:08.107 14.474 - 14.572: 99.7016% ( 5) 00:08:08.107 14.572 - 14.671: 99.7233% ( 4) 00:08:08.107 14.671 - 14.769: 99.7288% ( 1) 00:08:08.107 14.868 - 14.966: 99.7505% ( 4) 00:08:08.107 14.966 - 15.065: 99.7613% ( 2) 00:08:08.107 15.065 - 15.163: 99.7722% ( 2) 00:08:08.107 15.360 - 15.458: 99.7776% ( 1) 00:08:08.107 15.458 - 15.557: 99.7939% ( 3) 00:08:08.107 15.754 - 15.852: 99.7993% ( 1) 00:08:08.107 16.049 - 16.148: 99.8047% ( 1) 00:08:08.107 16.148 - 16.246: 99.8101% ( 1) 00:08:08.107 16.345 - 16.443: 99.8156% ( 1) 00:08:08.107 16.542 - 16.640: 99.8318% ( 3) 00:08:08.107 16.738 - 16.837: 99.8373% ( 1) 00:08:08.107 16.837 - 16.935: 99.8481% ( 2) 00:08:08.107 16.935 - 17.034: 99.8535% ( 1) 00:08:08.107 17.132 - 17.231: 99.8590% ( 1) 00:08:08.107 17.231 - 17.329: 99.8644% ( 1) 00:08:08.107 17.329 - 17.428: 99.8698% ( 1) 00:08:08.107 17.625 - 17.723: 99.8752% ( 1) 00:08:08.107 17.822 - 17.920: 99.8807% ( 1) 00:08:08.107 18.018 - 18.117: 99.8861% ( 1) 00:08:08.107 18.215 - 18.314: 99.8915% ( 1) 00:08:08.107 18.511 - 18.609: 99.8969% ( 1) 00:08:08.107 18.609 - 18.708: 99.9078% ( 2) 00:08:08.107 18.806 - 18.905: 99.9132% ( 1) 00:08:08.107 19.200 - 19.298: 99.9186% ( 1) 00:08:08.107 19.397 - 19.495: 99.9241% ( 1) 00:08:08.107 19.692 - 19.791: 99.9295% ( 1) 00:08:08.107 20.578 - 20.677: 99.9349% ( 1) 00:08:08.107 20.874 - 20.972: 99.9403% ( 1) 00:08:08.107 21.071 - 21.169: 99.9458% ( 1) 00:08:08.107 21.662 - 21.760: 99.9512% ( 1) 00:08:08.107 22.252 - 22.351: 99.9566% ( 1) 00:08:08.107 23.729 - 23.828: 99.9620% ( 1) 00:08:08.107 23.828 - 23.926: 99.9675% ( 1) 00:08:08.107 24.615 - 24.714: 99.9729% ( 1) 00:08:08.107 30.326 - 30.523: 99.9783% ( 1) 00:08:08.107 41.551 - 41.748: 99.9837% ( 1) 00:08:08.107 58.683 - 59.077: 99.9892% ( 1) 00:08:08.107 64.591 - 64.985: 99.9946% ( 1) 00:08:08.107 66.166 - 66.560: 100.0000% ( 1) 00:08:08.107 00:08:08.107 00:08:08.107 real 0m1.194s 00:08:08.107 user 0m1.051s 00:08:08.107 sys 0m0.092s 00:08:08.107 13:56:46 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.107 13:56:46 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:08.107 ************************************ 00:08:08.107 END TEST nvme_overhead 00:08:08.107 ************************************ 00:08:08.107 13:56:46 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:08.107 13:56:46 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:08.107 13:56:46 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.107 13:56:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.107 ************************************ 00:08:08.107 START TEST nvme_arbitration 00:08:08.107 ************************************ 00:08:08.107 13:56:46 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:11.385 Initializing NVMe Controllers 00:08:11.385 Attached to 0000:00:10.0 00:08:11.385 Attached 
to 0000:00:11.0 00:08:11.385 Attached to 0000:00:13.0 00:08:11.385 Attached to 0000:00:12.0 00:08:11.386 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:11.386 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:11.386 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:11.386 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:11.386 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:11.386 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:11.386 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:11.386 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:11.386 Initialization complete. Launching workers. 00:08:11.386 Starting thread on core 1 with urgent priority queue 00:08:11.386 Starting thread on core 2 with urgent priority queue 00:08:11.386 Starting thread on core 3 with urgent priority queue 00:08:11.386 Starting thread on core 0 with urgent priority queue 00:08:11.386 QEMU NVMe Ctrl (12340 ) core 0: 6570.67 IO/s 15.22 secs/100000 ios 00:08:11.386 QEMU NVMe Ctrl (12342 ) core 0: 6570.67 IO/s 15.22 secs/100000 ios 00:08:11.386 QEMU NVMe Ctrl (12341 ) core 1: 6656.00 IO/s 15.02 secs/100000 ios 00:08:11.386 QEMU NVMe Ctrl (12342 ) core 1: 6656.00 IO/s 15.02 secs/100000 ios 00:08:11.386 QEMU NVMe Ctrl (12343 ) core 2: 6229.33 IO/s 16.05 secs/100000 ios 00:08:11.386 QEMU NVMe Ctrl (12342 ) core 3: 6250.67 IO/s 16.00 secs/100000 ios 00:08:11.386 ======================================================== 00:08:11.386 00:08:11.386 00:08:11.386 real 0m3.202s 00:08:11.386 user 0m9.009s 00:08:11.386 sys 0m0.097s 00:08:11.386 13:56:49 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.386 ************************************ 00:08:11.386 END TEST nvme_arbitration 00:08:11.386 ************************************ 00:08:11.386 13:56:49 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:11.386 13:56:49 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:11.386 13:56:49 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:11.386 13:56:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.386 13:56:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.386 ************************************ 00:08:11.386 START TEST nvme_single_aen 00:08:11.386 ************************************ 00:08:11.386 13:56:49 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:11.386 Asynchronous Event Request test 00:08:11.386 Attached to 0000:00:10.0 00:08:11.386 Attached to 0000:00:11.0 00:08:11.386 Attached to 0000:00:13.0 00:08:11.386 Attached to 0000:00:12.0 00:08:11.386 Reset controller to setup AER completions for this process 00:08:11.386 Registering asynchronous event callbacks... 
00:08:11.386 Getting orig temperature thresholds of all controllers 00:08:11.386 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.386 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.386 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.386 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:11.386 Setting all controllers temperature threshold low to trigger AER 00:08:11.386 Waiting for all controllers temperature threshold to be set lower 00:08:11.386 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.386 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:11.386 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.386 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:11.386 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.386 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:11.386 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:11.386 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:11.386 Waiting for all controllers to trigger AER and reset threshold 00:08:11.386 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.386 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.386 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.386 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:11.386 Cleaning up... 00:08:11.386 00:08:11.386 real 0m0.187s 00:08:11.386 user 0m0.058s 00:08:11.386 sys 0m0.087s 00:08:11.386 13:56:49 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.386 ************************************ 00:08:11.386 END TEST nvme_single_aen 00:08:11.386 ************************************ 00:08:11.386 13:56:49 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:11.644 13:56:49 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:11.644 13:56:49 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.644 13:56:49 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.644 13:56:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.644 ************************************ 00:08:11.644 START TEST nvme_doorbell_aers 00:08:11.644 ************************************ 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
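The two traced commands just above are how nvme_doorbell_aers discovers its controllers: gen_nvme.sh emits a JSON config and jq extracts each PCIe address (traddr). Run by hand, the same pipeline is only a few lines (the rootdir path assumes this job's vagrant checkout):

  rootdir=/home/vagrant/spdk_repo/spdk
  # Collect every NVMe bdf from the generated config, one array element each.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # Fail if nothing was found, otherwise print one bdf per line.
  (( ${#bdfs[@]} == 0 )) && exit 1
  printf '%s\n' "${bdfs[@]}"

The (( 4 == 0 )) check and the printf of the four addresses in the trace that follows are exactly these two steps replayed by xtrace.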
00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:11.644 13:56:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:11.902 [2024-11-17 13:56:49.949606] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:21.946 Executing: test_write_invalid_db 00:08:21.946 Waiting for AER completion... 00:08:21.946 Failure: test_write_invalid_db 00:08:21.946 00:08:21.946 Executing: test_invalid_db_write_overflow_sq 00:08:21.946 Waiting for AER completion... 00:08:21.946 Failure: test_invalid_db_write_overflow_sq 00:08:21.946 00:08:21.946 Executing: test_invalid_db_write_overflow_cq 00:08:21.946 Waiting for AER completion... 00:08:21.946 Failure: test_invalid_db_write_overflow_cq 00:08:21.946 00:08:21.946 13:56:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.946 13:56:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:21.946 [2024-11-17 13:56:59.954571] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:31.937 Executing: test_write_invalid_db 00:08:31.937 Waiting for AER completion... 00:08:31.937 Failure: test_write_invalid_db 00:08:31.937 00:08:31.937 Executing: test_invalid_db_write_overflow_sq 00:08:31.937 Waiting for AER completion... 00:08:31.937 Failure: test_invalid_db_write_overflow_sq 00:08:31.937 00:08:31.937 Executing: test_invalid_db_write_overflow_cq 00:08:31.937 Waiting for AER completion... 00:08:31.937 Failure: test_invalid_db_write_overflow_cq 00:08:31.937 00:08:31.937 13:57:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:31.937 13:57:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:31.937 [2024-11-17 13:57:09.966768] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:41.900 Executing: test_write_invalid_db 00:08:41.900 Waiting for AER completion... 00:08:41.900 Failure: test_write_invalid_db 00:08:41.900 00:08:41.900 Executing: test_invalid_db_write_overflow_sq 00:08:41.900 Waiting for AER completion... 00:08:41.900 Failure: test_invalid_db_write_overflow_sq 00:08:41.900 00:08:41.900 Executing: test_invalid_db_write_overflow_cq 00:08:41.900 Waiting for AER completion... 
00:08:41.900 Failure: test_invalid_db_write_overflow_cq 00:08:41.900 00:08:41.900 13:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:41.900 13:57:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:41.900 [2024-11-17 13:57:20.009858] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 Executing: test_write_invalid_db 00:08:51.938 Waiting for AER completion... 00:08:51.938 Failure: test_write_invalid_db 00:08:51.938 00:08:51.938 Executing: test_invalid_db_write_overflow_sq 00:08:51.938 Waiting for AER completion... 00:08:51.938 Failure: test_invalid_db_write_overflow_sq 00:08:51.938 00:08:51.938 Executing: test_invalid_db_write_overflow_cq 00:08:51.938 Waiting for AER completion... 00:08:51.938 Failure: test_invalid_db_write_overflow_cq 00:08:51.938 00:08:51.938 00:08:51.938 real 0m40.165s 00:08:51.938 user 0m34.272s 00:08:51.938 sys 0m5.547s 00:08:51.938 13:57:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.938 ************************************ 00:08:51.938 END TEST nvme_doorbell_aers 00:08:51.938 ************************************ 00:08:51.938 13:57:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 13:57:29 nvme -- nvme/nvme.sh@97 -- # uname 00:08:51.938 13:57:29 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:51.938 13:57:29 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:51.938 13:57:29 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:51.938 13:57:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.938 13:57:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.938 ************************************ 00:08:51.938 START TEST nvme_multi_aen 00:08:51.938 ************************************ 00:08:51.938 13:57:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:51.938 [2024-11-17 13:57:30.045203] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.045281] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.045291] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.046456] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.046486] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.046494] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.047428] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. 
Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.047454] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.047461] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.048337] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.048363] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 [2024-11-17 13:57:30.048371] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75406) is not found. Dropping the request. 00:08:51.938 Child process pid: 75926 00:08:51.938 [Child] Asynchronous Event Request test 00:08:51.938 [Child] Attached to 0000:00:10.0 00:08:51.938 [Child] Attached to 0000:00:11.0 00:08:51.938 [Child] Attached to 0000:00:13.0 00:08:51.938 [Child] Attached to 0000:00:12.0 00:08:51.938 [Child] Registering asynchronous event callbacks... 00:08:51.938 [Child] Getting orig temperature thresholds of all controllers 00:08:51.938 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.938 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.938 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.938 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:51.938 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:51.938 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.938 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.938 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.938 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:51.938 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.938 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.938 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.938 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.938 [Child] Cleaning up... 00:08:52.197 Asynchronous Event Request test 00:08:52.198 Attached to 0000:00:10.0 00:08:52.198 Attached to 0000:00:11.0 00:08:52.198 Attached to 0000:00:13.0 00:08:52.198 Attached to 0000:00:12.0 00:08:52.198 Reset controller to setup AER completions for this process 00:08:52.198 Registering asynchronous event callbacks... 
00:08:52.198 Getting orig temperature thresholds of all controllers 00:08:52.198 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.198 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.198 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.198 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:52.198 Setting all controllers temperature threshold low to trigger AER 00:08:52.198 Waiting for all controllers temperature threshold to be set lower 00:08:52.198 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.198 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:52.198 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.198 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:52.198 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.198 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:52.198 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:52.198 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:52.198 Waiting for all controllers to trigger AER and reset threshold 00:08:52.198 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.198 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.198 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.198 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.198 Cleaning up... 00:08:52.198 00:08:52.198 real 0m0.367s 00:08:52.198 user 0m0.114s 00:08:52.198 sys 0m0.161s 00:08:52.198 13:57:30 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.198 13:57:30 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:52.198 ************************************ 00:08:52.198 END TEST nvme_multi_aen 00:08:52.198 ************************************ 00:08:52.198 13:57:30 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:52.198 13:57:30 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:52.198 13:57:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.198 13:57:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.198 ************************************ 00:08:52.198 START TEST nvme_startup 00:08:52.198 ************************************ 00:08:52.198 13:57:30 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:52.198 Initializing NVMe Controllers 00:08:52.198 Attached to 0000:00:10.0 00:08:52.198 Attached to 0000:00:11.0 00:08:52.198 Attached to 0000:00:13.0 00:08:52.198 Attached to 0000:00:12.0 00:08:52.198 Initialization complete. 00:08:52.198 Time used:118798.211 (us). 
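The startup binary reports its own initialization time in microseconds, while the real/user/sys lines that follow time the whole process. Converting the figure above is a one-liner (a sanity check, not part of the suite):

  awk 'BEGIN { printf "%.3f s\n", 118798.211 / 1e6 }'   # about 0.119 s of the 0.170 s real time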
00:08:52.198 00:08:52.198 real 0m0.170s 00:08:52.198 user 0m0.052s 00:08:52.198 sys 0m0.075s 00:08:52.198 13:57:30 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.198 13:57:30 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:52.198 ************************************ 00:08:52.198 END TEST nvme_startup 00:08:52.198 ************************************ 00:08:52.460 13:57:30 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:52.460 13:57:30 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.460 13:57:30 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.460 13:57:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.460 ************************************ 00:08:52.460 START TEST nvme_multi_secondary 00:08:52.460 ************************************ 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75977 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75978 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:52.460 13:57:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:55.750 Initializing NVMe Controllers 00:08:55.750 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.750 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:55.750 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:55.750 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:55.750 Initialization complete. Launching workers. 
00:08:55.750 ======================================================== 00:08:55.750 Latency(us) 00:08:55.750 Device Information : IOPS MiB/s Average min max 00:08:55.750 PCIE (0000:00:10.0) NSID 1 from core 2: 3204.89 12.52 4989.89 1444.72 12813.00 00:08:55.750 PCIE (0000:00:11.0) NSID 1 from core 2: 3204.89 12.52 4991.98 1219.71 13091.15 00:08:55.750 PCIE (0000:00:13.0) NSID 1 from core 2: 3204.89 12.52 4998.77 1304.65 12801.56 00:08:55.750 PCIE (0000:00:12.0) NSID 1 from core 2: 3204.89 12.52 4998.92 1384.66 12986.95 00:08:55.750 PCIE (0000:00:12.0) NSID 2 from core 2: 3204.89 12.52 4998.98 1293.94 12380.01 00:08:55.750 PCIE (0000:00:12.0) NSID 3 from core 2: 3204.89 12.52 4998.71 1450.62 12795.06 00:08:55.750 ======================================================== 00:08:55.750 Total : 19229.34 75.11 4996.21 1219.71 13091.15 00:08:55.750 00:08:55.750 13:57:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75977 00:08:55.750 Initializing NVMe Controllers 00:08:55.750 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.750 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.750 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:55.750 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:55.750 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:55.750 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:55.750 Initialization complete. Launching workers. 00:08:55.750 ======================================================== 00:08:55.750 Latency(us) 00:08:55.750 Device Information : IOPS MiB/s Average min max 00:08:55.750 PCIE (0000:00:10.0) NSID 1 from core 1: 7497.42 29.29 2132.67 1078.66 6325.79 00:08:55.750 PCIE (0000:00:11.0) NSID 1 from core 1: 7497.42 29.29 2133.63 1111.13 6829.44 00:08:55.750 PCIE (0000:00:13.0) NSID 1 from core 1: 7497.42 29.29 2133.61 1056.37 6569.37 00:08:55.750 PCIE (0000:00:12.0) NSID 1 from core 1: 7497.42 29.29 2133.59 1201.27 6991.48 00:08:55.750 PCIE (0000:00:12.0) NSID 2 from core 1: 7497.42 29.29 2133.57 1095.35 6607.42 00:08:55.750 PCIE (0000:00:12.0) NSID 3 from core 1: 7497.42 29.29 2133.53 1000.23 6066.40 00:08:55.750 ======================================================== 00:08:55.750 Total : 44984.50 175.72 2133.43 1000.23 6991.48 00:08:55.750 00:08:57.649 Initializing NVMe Controllers 00:08:57.649 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.649 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.649 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.649 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.649 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:57.649 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:57.649 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:57.649 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:57.649 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:57.649 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:57.649 Initialization complete. Launching workers. 
00:08:57.649 ======================================================== 00:08:57.649 Latency(us) 00:08:57.649 Device Information : IOPS MiB/s Average min max 00:08:57.649 PCIE (0000:00:10.0) NSID 1 from core 0: 10731.54 41.92 1489.73 695.19 6099.22 00:08:57.649 PCIE (0000:00:11.0) NSID 1 from core 0: 10731.54 41.92 1490.56 715.04 6292.10 00:08:57.649 PCIE (0000:00:13.0) NSID 1 from core 0: 10731.54 41.92 1490.55 632.15 6284.79 00:08:57.649 PCIE (0000:00:12.0) NSID 1 from core 0: 10731.54 41.92 1490.54 546.22 5919.99 00:08:57.649 PCIE (0000:00:12.0) NSID 2 from core 0: 10731.54 41.92 1490.53 461.12 6139.56 00:08:57.649 PCIE (0000:00:12.0) NSID 3 from core 0: 10731.54 41.92 1490.52 392.38 6273.69 00:08:57.649 ======================================================== 00:08:57.649 Total : 64389.24 251.52 1490.40 392.38 6292.10 00:08:57.649 00:08:57.649 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75978 00:08:57.649 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76047 00:08:57.649 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:57.649 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76048 00:08:57.649 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:57.650 13:57:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:00.930 Initializing NVMe Controllers 00:09:00.930 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.930 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:00.930 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:00.930 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:00.930 Initialization complete. Launching workers. 
00:09:00.930 ======================================================== 00:09:00.930 Latency(us) 00:09:00.930 Device Information : IOPS MiB/s Average min max 00:09:00.930 PCIE (0000:00:10.0) NSID 1 from core 0: 7917.86 30.93 2019.45 716.58 6365.87 00:09:00.930 PCIE (0000:00:11.0) NSID 1 from core 0: 7917.86 30.93 2020.51 739.65 6464.77 00:09:00.930 PCIE (0000:00:13.0) NSID 1 from core 0: 7917.86 30.93 2020.57 737.40 6306.43 00:09:00.930 PCIE (0000:00:12.0) NSID 1 from core 0: 7917.86 30.93 2020.69 745.54 6109.33 00:09:00.930 PCIE (0000:00:12.0) NSID 2 from core 0: 7917.86 30.93 2020.81 727.12 6074.79 00:09:00.930 PCIE (0000:00:12.0) NSID 3 from core 0: 7917.86 30.93 2020.82 725.85 5919.64 00:09:00.930 ======================================================== 00:09:00.930 Total : 47507.19 185.57 2020.47 716.58 6464.77 00:09:00.930 00:09:00.930 Initializing NVMe Controllers 00:09:00.930 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:00.930 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:00.930 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:00.930 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:00.930 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:00.930 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:00.930 Initialization complete. Launching workers. 00:09:00.930 ======================================================== 00:09:00.931 Latency(us) 00:09:00.931 Device Information : IOPS MiB/s Average min max 00:09:00.931 PCIE (0000:00:10.0) NSID 1 from core 1: 7861.70 30.71 2033.85 730.59 6246.34 00:09:00.931 PCIE (0000:00:11.0) NSID 1 from core 1: 7861.70 30.71 2034.70 748.31 6026.81 00:09:00.931 PCIE (0000:00:13.0) NSID 1 from core 1: 7861.70 30.71 2034.64 746.82 5532.94 00:09:00.931 PCIE (0000:00:12.0) NSID 1 from core 1: 7861.70 30.71 2034.56 495.60 5509.70 00:09:00.931 PCIE (0000:00:12.0) NSID 2 from core 1: 7861.70 30.71 2034.53 465.23 5782.14 00:09:00.931 PCIE (0000:00:12.0) NSID 3 from core 1: 7861.70 30.71 2034.48 405.09 6122.11 00:09:00.931 ======================================================== 00:09:00.931 Total : 47170.17 184.26 2034.46 405.09 6246.34 00:09:00.931 00:09:02.828 Initializing NVMe Controllers 00:09:02.828 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:02.828 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:02.828 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:02.828 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:02.828 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:02.828 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:02.828 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:02.828 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:02.828 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:02.828 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:02.828 Initialization complete. Launching workers. 
00:09:02.828 ======================================================== 00:09:02.828 Latency(us) 00:09:02.828 Device Information : IOPS MiB/s Average min max 00:09:02.828 PCIE (0000:00:10.0) NSID 1 from core 2: 4706.40 18.38 3397.96 747.86 12979.68 00:09:02.828 PCIE (0000:00:11.0) NSID 1 from core 2: 4706.40 18.38 3399.26 735.64 12376.96 00:09:02.828 PCIE (0000:00:13.0) NSID 1 from core 2: 4706.40 18.38 3399.20 734.52 15299.32 00:09:02.828 PCIE (0000:00:12.0) NSID 1 from core 2: 4706.40 18.38 3399.14 631.96 12791.30 00:09:02.829 PCIE (0000:00:12.0) NSID 2 from core 2: 4706.40 18.38 3398.91 542.66 12522.55 00:09:02.829 PCIE (0000:00:12.0) NSID 3 from core 2: 4706.40 18.38 3398.69 433.92 12616.22 00:09:02.829 ======================================================== 00:09:02.829 Total : 28238.39 110.31 3398.86 433.92 15299.32 00:09:02.829 00:09:02.829 13:57:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76047 00:09:02.829 13:57:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76048 00:09:02.829 00:09:02.829 real 0m10.577s 00:09:02.829 user 0m18.255s 00:09:02.829 sys 0m0.561s 00:09:02.829 13:57:41 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.829 13:57:41 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:02.829 ************************************ 00:09:02.829 END TEST nvme_multi_secondary 00:09:02.829 ************************************ 00:09:02.829 13:57:41 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:02.829 13:57:41 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:02.829 13:57:41 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75015 ]] 00:09:02.829 13:57:41 nvme -- common/autotest_common.sh@1090 -- # kill 75015 00:09:02.829 13:57:41 nvme -- common/autotest_common.sh@1091 -- # wait 75015 00:09:02.829 [2024-11-17 13:57:41.125016] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.125155] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.125188] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.125221] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.126130] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.126224] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.126344] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.126383] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.127199] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 
00:09:02.829 [2024-11-17 13:57:41.127330] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.127376] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:02.829 [2024-11-17 13:57:41.127420] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:03.087 [2024-11-17 13:57:41.128298] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:03.087 [2024-11-17 13:57:41.128387] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:03.087 [2024-11-17 13:57:41.128419] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:03.087 [2024-11-17 13:57:41.128454] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75925) is not found. Dropping the request. 00:09:03.087 13:57:41 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:03.087 13:57:41 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:03.087 13:57:41 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:03.087 13:57:41 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.087 13:57:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.087 13:57:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.087 ************************************ 00:09:03.087 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:03.087 ************************************ 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:03.087 * Looking for test storage... 
00:09:03.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.087 --rc genhtml_branch_coverage=1 00:09:03.087 --rc genhtml_function_coverage=1 00:09:03.087 --rc genhtml_legend=1 00:09:03.087 --rc geninfo_all_blocks=1 00:09:03.087 --rc geninfo_unexecuted_blocks=1 00:09:03.087 00:09:03.087 ' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.087 --rc genhtml_branch_coverage=1 00:09:03.087 --rc genhtml_function_coverage=1 00:09:03.087 --rc genhtml_legend=1 00:09:03.087 --rc geninfo_all_blocks=1 00:09:03.087 --rc geninfo_unexecuted_blocks=1 00:09:03.087 00:09:03.087 ' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.087 --rc genhtml_branch_coverage=1 00:09:03.087 --rc genhtml_function_coverage=1 00:09:03.087 --rc genhtml_legend=1 00:09:03.087 --rc geninfo_all_blocks=1 00:09:03.087 --rc geninfo_unexecuted_blocks=1 00:09:03.087 00:09:03.087 ' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:03.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.087 --rc genhtml_branch_coverage=1 00:09:03.087 --rc genhtml_function_coverage=1 00:09:03.087 --rc genhtml_legend=1 00:09:03.087 --rc geninfo_all_blocks=1 00:09:03.087 --rc geninfo_unexecuted_blocks=1 00:09:03.087 00:09:03.087 ' 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:03.087 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:03.088 
13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.088 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:03.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76207 00:09:03.346 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76207 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76207 ']' 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
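The bdf discovery just traced is worth calling out: rather than walking sysfs, the harness asks gen_nvme.sh for a bdev_nvme attach config and pulls the PCIe addresses out of it with jq. A minimal sketch of that flow, assuming only the gen_nvme.sh path and JSON shape visible in the trace:

#!/usr/bin/env bash
# Sketch of the get_first_nvme_bdf flow traced above.
rootdir=/home/vagrant/spdk_repo/spdk

# gen_nvme.sh emits a JSON config with one bdev_nvme_attach_controller
# entry per local controller; params.traddr carries the PCIe address.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 ... 0000:00:13.0 in this run
echo "first bdf: ${bdfs[0]}"
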
00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.347 13:57:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.347 [2024-11-17 13:57:41.499844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:03.347 [2024-11-17 13:57:41.499958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76207 ] 00:09:03.605 [2024-11-17 13:57:41.657806] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.605 [2024-11-17 13:57:41.693312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.605 [2024-11-17 13:57:41.693493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.605 [2024-11-17 13:57:41.693749] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.605 [2024-11-17 13:57:41.693826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.232 nvme0n1 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_fPLMz.txt 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:04.232 true 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731851862 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76230 00:09:04.232 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:04.233 13:57:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:04.233 13:57:42 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.135 [2024-11-17 13:57:44.414902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:06.135 [2024-11-17 13:57:44.415153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:06.135 [2024-11-17 13:57:44.415174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:06.135 [2024-11-17 13:57:44.415200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:06.135 [2024-11-17 13:57:44.417095] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:06.135 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76230 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76230 00:09:06.135 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76230 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:06.394 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_fPLMz.txt 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:06.395 13:57:44 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_fPLMz.txt 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76207 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76207 ']' 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76207 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76207 00:09:06.395 killing process with pid 76207 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76207' 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76207 00:09:06.395 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76207 00:09:06.654 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:06.654 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:06.654 00:09:06.654 real 
0m3.543s 00:09:06.654 user 0m12.588s 00:09:06.654 sys 0m0.456s 00:09:06.654 ************************************ 00:09:06.654 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:06.654 ************************************ 00:09:06.654 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.654 13:57:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.654 13:57:44 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:06.654 13:57:44 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:06.654 13:57:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.654 13:57:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.654 13:57:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.654 ************************************ 00:09:06.654 START TEST nvme_fio 00:09:06.654 ************************************ 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:06.654 13:57:44 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:06.654 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:06.655 13:57:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:06.912 13:57:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:06.913 13:57:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:07.171 13:57:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.171 13:57:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.171 13:57:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:07.171 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.171 fio-3.35 00:09:07.171 Starting 1 thread 00:09:13.729 00:09:13.729 test: (groupid=0, jobs=1): err= 0: pid=76359: Sun Nov 17 13:57:51 2024 00:09:13.729 read: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(165MiB/2001msec) 00:09:13.729 slat (nsec): min=3314, max=62729, avg=4968.92, stdev=2044.67 00:09:13.729 clat (usec): min=362, max=10760, avg=3024.54, stdev=1034.57 00:09:13.729 lat (usec): min=367, max=10800, avg=3029.50, stdev=1035.49 00:09:13.729 clat percentiles (usec): 00:09:13.729 | 1.00th=[ 1729], 5.00th=[ 2114], 10.00th=[ 2212], 20.00th=[ 2376], 00:09:13.729 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2802], 00:09:13.729 | 70.00th=[ 3032], 80.00th=[ 3490], 90.00th=[ 4621], 95.00th=[ 5342], 00:09:13.729 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 8094], 99.95th=[ 8848], 00:09:13.729 | 99.99th=[10552] 00:09:13.729 bw ( KiB/s): min=79680, max=89240, per=99.45%, avg=84008.00, stdev=4843.69, samples=3 00:09:13.729 iops : min=19920, max=22310, avg=21002.00, stdev=1210.92, samples=3 00:09:13.729 write: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2001msec); 0 zone resets 00:09:13.729 slat (nsec): min=3373, max=71861, avg=5167.78, stdev=2167.83 00:09:13.729 clat (usec): min=354, max=10657, avg=3036.08, stdev=1033.13 00:09:13.729 lat (usec): min=359, max=10670, avg=3041.25, stdev=1034.05 00:09:13.729 clat percentiles (usec): 00:09:13.729 | 1.00th=[ 1713], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:13.729 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2671], 60.00th=[ 2835], 00:09:13.729 | 70.00th=[ 3064], 80.00th=[ 3523], 90.00th=[ 4555], 95.00th=[ 5342], 00:09:13.729 | 99.00th=[ 6652], 99.50th=[ 7046], 99.90th=[ 8291], 99.95th=[ 8979], 00:09:13.729 | 99.99th=[10421] 00:09:13.729 bw ( KiB/s): min=80232, max=88808, per=100.00%, avg=84093.33, stdev=4351.22, samples=3 00:09:13.729 iops : min=20058, max=22202, avg=21023.33, stdev=1087.80, samples=3 00:09:13.729 lat (usec) 
: 500=0.02%, 750=0.01%, 1000=0.05% 00:09:13.729 lat (msec) : 2=2.14%, 4=83.25%, 10=14.50%, 20=0.03% 00:09:13.729 cpu : usr=99.00%, sys=0.20%, ctx=4, majf=0, minf=626 00:09:13.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:13.730 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.730 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:13.730 issued rwts: total=42259,41997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.730 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:13.730 00:09:13.730 Run status group 0 (all jobs): 00:09:13.730 READ: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:13.730 WRITE: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2001-2001msec 00:09:13.730 ----------------------------------------------------- 00:09:13.730 Suppressions used: 00:09:13.730 count bytes template 00:09:13.730 1 32 /usr/src/fio/parse.c 00:09:13.730 1 8 libtcmalloc_minimal.so 00:09:13.730 ----------------------------------------------------- 00:09:13.730 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:13.730 13:57:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:13.730 13:57:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:13.730 13:57:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:13.730 13:57:52 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:09:13.730 13:57:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:13.730 13:57:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:13.730 13:57:52 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:13.988 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:13.988 fio-3.35 00:09:13.988 Starting 1 thread 00:09:20.557 00:09:20.557 test: (groupid=0, jobs=1): err= 0: pid=76415: Sun Nov 17 13:57:57 2024 00:09:20.557 read: IOPS=18.7k, BW=72.9MiB/s (76.4MB/s)(146MiB/2001msec) 00:09:20.557 slat (nsec): min=4198, max=74156, avg=5323.35, stdev=2666.47 00:09:20.557 clat (usec): min=364, max=11640, avg=3414.67, stdev=1315.32 00:09:20.557 lat (usec): min=370, max=11714, avg=3420.00, stdev=1316.37 00:09:20.557 clat percentiles (usec): 00:09:20.557 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:20.557 | 30.00th=[ 2507], 40.00th=[ 2638], 50.00th=[ 2868], 60.00th=[ 3163], 00:09:20.558 | 70.00th=[ 3818], 80.00th=[ 4686], 90.00th=[ 5407], 95.00th=[ 5932], 00:09:20.558 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 9372], 99.95th=[10290], 00:09:20.558 | 99.99th=[11469] 00:09:20.558 bw ( KiB/s): min=62736, max=87312, per=98.77%, avg=73733.33, stdev=12489.69, samples=3 00:09:20.558 iops : min=15684, max=21828, avg=18433.33, stdev=3122.42, samples=3 00:09:20.558 write: IOPS=18.7k, BW=72.9MiB/s (76.4MB/s)(146MiB/2001msec); 0 zone resets 00:09:20.558 slat (nsec): min=4267, max=51188, avg=5473.48, stdev=2600.74 00:09:20.558 clat (usec): min=342, max=11527, avg=3423.19, stdev=1320.18 00:09:20.558 lat (usec): min=348, max=11540, avg=3428.66, stdev=1321.18 00:09:20.558 clat percentiles (usec): 00:09:20.558 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2278], 20.00th=[ 2376], 00:09:20.558 | 30.00th=[ 2507], 40.00th=[ 2671], 50.00th=[ 2868], 60.00th=[ 3163], 00:09:20.558 | 70.00th=[ 3818], 80.00th=[ 4686], 90.00th=[ 5473], 95.00th=[ 5997], 00:09:20.558 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[ 9372], 99.95th=[10290], 00:09:20.558 | 99.99th=[11338] 00:09:20.558 bw ( KiB/s): min=62320, max=87440, per=98.70%, avg=73661.33, stdev=12736.13, samples=3 00:09:20.558 iops : min=15580, max=21860, avg=18415.33, stdev=3184.03, samples=3 00:09:20.558 lat (usec) : 500=0.01%, 1000=0.04% 00:09:20.558 lat (msec) : 2=1.79%, 4=70.07%, 10=28.03%, 20=0.06% 00:09:20.558 cpu : usr=98.95%, sys=0.05%, ctx=3, majf=0, minf=627 00:09:20.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.558 issued rwts: total=37344,37336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.558 00:09:20.558 Run status group 0 (all jobs): 00:09:20.558 READ: bw=72.9MiB/s (76.4MB/s), 72.9MiB/s-72.9MiB/s (76.4MB/s-76.4MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:20.558 WRITE: bw=72.9MiB/s (76.4MB/s), 72.9MiB/s-72.9MiB/s (76.4MB/s-76.4MB/s), io=146MiB (153MB), run=2001-2001msec 00:09:20.558 ----------------------------------------------------- 00:09:20.558 Suppressions used: 00:09:20.558 count bytes template 00:09:20.558 1 32 
/usr/src/fio/parse.c 00:09:20.558 1 8 libtcmalloc_minimal.so 00:09:20.558 ----------------------------------------------------- 00:09:20.558 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:20.558 13:57:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:20.558 13:57:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:20.558 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:20.558 fio-3.35 00:09:20.558 Starting 1 thread 00:09:25.849 00:09:25.849 test: (groupid=0, jobs=1): err= 0: pid=76471: Sun Nov 17 13:58:03 2024 00:09:25.849 read: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(109MiB/2001msec) 00:09:25.849 slat (nsec): min=4848, max=99798, avg=7103.33, stdev=4487.96 00:09:25.849 clat (usec): min=441, max=13421, avg=4572.08, stdev=1614.89 00:09:25.849 lat (usec): min=464, max=13460, avg=4579.18, 
stdev=1616.36 00:09:25.849 clat percentiles (usec): 00:09:25.849 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2900], 20.00th=[ 3130], 00:09:25.849 | 30.00th=[ 3326], 40.00th=[ 3654], 50.00th=[ 4178], 60.00th=[ 4817], 00:09:25.849 | 70.00th=[ 5342], 80.00th=[ 5997], 90.00th=[ 6849], 95.00th=[ 7635], 00:09:25.849 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[11600], 99.95th=[12911], 00:09:25.849 | 99.99th=[13304] 00:09:25.849 bw ( KiB/s): min=51776, max=58456, per=97.69%, avg=54405.33, stdev=3559.60, samples=3 00:09:25.849 iops : min=12944, max=14614, avg=13601.33, stdev=889.90, samples=3 00:09:25.849 write: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(109MiB/2001msec); 0 zone resets 00:09:25.849 slat (usec): min=4, max=106, avg= 7.32, stdev= 4.64 00:09:25.849 clat (usec): min=943, max=13546, avg=4593.04, stdev=1611.04 00:09:25.849 lat (usec): min=949, max=13552, avg=4600.37, stdev=1612.48 00:09:25.849 clat percentiles (usec): 00:09:25.849 | 1.00th=[ 2442], 5.00th=[ 2769], 10.00th=[ 2933], 20.00th=[ 3130], 00:09:25.849 | 30.00th=[ 3359], 40.00th=[ 3687], 50.00th=[ 4178], 60.00th=[ 4817], 00:09:25.849 | 70.00th=[ 5407], 80.00th=[ 5997], 90.00th=[ 6849], 95.00th=[ 7635], 00:09:25.849 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[11469], 99.95th=[12911], 00:09:25.849 | 99.99th=[13304] 00:09:25.849 bw ( KiB/s): min=51936, max=58352, per=97.77%, avg=54466.67, stdev=3415.79, samples=3 00:09:25.849 iops : min=12984, max=14588, avg=13616.67, stdev=853.95, samples=3 00:09:25.849 lat (usec) : 500=0.01%, 1000=0.01% 00:09:25.849 lat (msec) : 2=0.19%, 4=46.86%, 10=52.55%, 20=0.39% 00:09:25.849 cpu : usr=98.20%, sys=0.20%, ctx=4, majf=0, minf=627 00:09:25.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:25.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.849 issued rwts: total=27859,27869,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.849 00:09:25.849 Run status group 0 (all jobs): 00:09:25.849 READ: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2001-2001msec 00:09:25.849 WRITE: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2001-2001msec 00:09:25.849 ----------------------------------------------------- 00:09:25.849 Suppressions used: 00:09:25.849 count bytes template 00:09:25.849 1 32 /usr/src/fio/parse.c 00:09:25.849 1 8 libtcmalloc_minimal.so 00:09:25.849 ----------------------------------------------------- 00:09:25.849 00:09:25.849 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:25.849 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.849 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:25.849 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.108 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.109 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:26.367 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.367 13:58:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
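Each per-controller fio pass above repeats the same invocation dance: resolve the ASan runtime the SPDK fio plugin links against, preload it ahead of the plugin, and hand fio a PCIe address (with colons rewritten to dots, since fio treats ':' as a filename separator) instead of a block device. A condensed sketch of that wrapper, with paths taken from the trace:

#!/usr/bin/env bash
# Sketch of the fio_nvme invocation pattern repeated per controller above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
config=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

# The plugin was built with ASan but fio itself was not, so the sanitizer
# runtime must be preloaded before the plugin's ioengine is loaded.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# ':' is fio's filename separator, hence traddr=0000.00.13.0, not 0000:00:13.0.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$config" \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
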
00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.367 13:58:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:26.367 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.367 fio-3.35 00:09:26.367 Starting 1 thread 00:09:31.665 00:09:31.665 test: (groupid=0, jobs=1): err= 0: pid=76531: Sun Nov 17 13:58:09 2024 00:09:31.665 read: IOPS=15.7k, BW=61.5MiB/s (64.4MB/s)(123MiB/2001msec) 00:09:31.665 slat (nsec): min=4222, max=80573, avg=6004.88, stdev=3564.51 00:09:31.665 clat (usec): min=941, max=12796, avg=4038.69, stdev=1405.33 00:09:31.665 lat (usec): min=945, max=12850, avg=4044.69, stdev=1406.56 00:09:31.665 clat percentiles (usec): 00:09:31.665 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2704], 00:09:31.665 | 30.00th=[ 2933], 40.00th=[ 3261], 50.00th=[ 3785], 60.00th=[ 4359], 00:09:31.665 | 70.00th=[ 4817], 80.00th=[ 5276], 90.00th=[ 5997], 95.00th=[ 6521], 00:09:31.665 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[ 9634], 99.95th=[10421], 00:09:31.665 | 99.99th=[11076] 00:09:31.665 bw ( KiB/s): min=60312, max=68592, per=100.00%, avg=63720.00, stdev=4329.79, samples=3 00:09:31.665 iops : min=15078, max=17148, avg=15930.00, stdev=1082.45, samples=3 00:09:31.665 write: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(123MiB/2001msec); 0 zone resets 00:09:31.665 slat (nsec): min=4273, max=71254, avg=6156.28, stdev=3481.36 00:09:31.665 clat (usec): min=933, max=11106, avg=4068.81, stdev=1408.69 00:09:31.665 lat (usec): min=937, max=11147, avg=4074.97, stdev=1409.86 00:09:31.665 clat percentiles (usec): 00:09:31.665 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2704], 00:09:31.665 | 30.00th=[ 2966], 
40.00th=[ 3326], 50.00th=[ 3851], 60.00th=[ 4359], 00:09:31.665 | 70.00th=[ 4883], 80.00th=[ 5342], 90.00th=[ 5997], 95.00th=[ 6587], 00:09:31.665 | 99.00th=[ 7701], 99.50th=[ 8029], 99.90th=[ 9896], 99.95th=[10552], 00:09:31.665 | 99.99th=[10945] 00:09:31.665 bw ( KiB/s): min=60728, max=68088, per=100.00%, avg=63448.00, stdev=4038.22, samples=3 00:09:31.665 iops : min=15182, max=17022, avg=15862.00, stdev=1009.55, samples=3 00:09:31.665 lat (usec) : 1000=0.01% 00:09:31.665 lat (msec) : 2=0.33%, 4=53.05%, 10=46.52%, 20=0.08% 00:09:31.665 cpu : usr=98.65%, sys=0.15%, ctx=3, majf=0, minf=626 00:09:31.665 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:31.665 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.665 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.665 issued rwts: total=31482,31515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.665 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.665 00:09:31.665 Run status group 0 (all jobs): 00:09:31.665 READ: bw=61.5MiB/s (64.4MB/s), 61.5MiB/s-61.5MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:31.665 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=123MiB (129MB), run=2001-2001msec 00:09:31.665 ----------------------------------------------------- 00:09:31.665 Suppressions used: 00:09:31.665 count bytes template 00:09:31.665 1 32 /usr/src/fio/parse.c 00:09:31.665 1 8 libtcmalloc_minimal.so 00:09:31.665 ----------------------------------------------------- 00:09:31.665 00:09:31.665 13:58:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:31.665 ************************************ 00:09:31.665 END TEST nvme_fio 00:09:31.665 ************************************ 00:09:31.665 13:58:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:31.665 00:09:31.665 real 0m24.669s 00:09:31.665 user 0m16.225s 00:09:31.665 sys 0m14.110s 00:09:31.665 13:58:09 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.665 13:58:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:31.665 00:09:31.666 real 1m31.659s 00:09:31.666 user 3m30.642s 00:09:31.666 sys 0m23.992s 00:09:31.666 13:58:09 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.666 ************************************ 00:09:31.666 END TEST nvme 00:09:31.666 ************************************ 00:09:31.666 13:58:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.666 13:58:09 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:31.666 13:58:09 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:31.666 13:58:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.666 13:58:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.666 13:58:09 -- common/autotest_common.sh@10 -- # set +x 00:09:31.666 ************************************ 00:09:31.666 START TEST nvme_scc 00:09:31.666 ************************************ 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:31.666 * Looking for test storage... 
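Looking back at the stuck-admin-command test above: after arming a one-shot error injection and resetting the controller mid-command, it verified the completion it got back by base64-decoding the JSON cpl blob and slicing out SC and SCT. The helper's exact argument convention is not fully visible in the trace, so the sketch below assumes (shift, mask) arguments, inferred from the calls `... 1 255` and `... 9 3` both reducing status=2; the byte layout of the 16-byte completion entry is standard NVMe:

# Hedged sketch of the base64_decode_bits helper traced earlier.
base64_decode_bits() {
    local -a bin_array
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # Bytes 14-15 of the completion entry hold the 16-bit status field,
    # little-endian: bit 0 phase tag, bits 1-8 SC, bits 9-11 SCT.
    local status=$(( bin_array[14] | (bin_array[15] << 8) ))
    printf '0x%x\n' $(( (status >> $2) & $3 ))
}

base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1
base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0
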
00:09:31.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:31.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.666 --rc genhtml_branch_coverage=1 00:09:31.666 --rc genhtml_function_coverage=1 00:09:31.666 --rc genhtml_legend=1 00:09:31.666 --rc geninfo_all_blocks=1 00:09:31.666 --rc geninfo_unexecuted_blocks=1 00:09:31.666 00:09:31.666 ' 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:31.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.666 --rc genhtml_branch_coverage=1 00:09:31.666 --rc genhtml_function_coverage=1 00:09:31.666 --rc genhtml_legend=1 00:09:31.666 --rc geninfo_all_blocks=1 00:09:31.666 --rc geninfo_unexecuted_blocks=1 00:09:31.666 00:09:31.666 ' 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:31.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.666 --rc genhtml_branch_coverage=1 00:09:31.666 --rc genhtml_function_coverage=1 00:09:31.666 --rc genhtml_legend=1 00:09:31.666 --rc geninfo_all_blocks=1 00:09:31.666 --rc geninfo_unexecuted_blocks=1 00:09:31.666 00:09:31.666 ' 00:09:31.666 13:58:09 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:31.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.666 --rc genhtml_branch_coverage=1 00:09:31.666 --rc genhtml_function_coverage=1 00:09:31.666 --rc genhtml_legend=1 00:09:31.666 --rc geninfo_all_blocks=1 00:09:31.666 --rc geninfo_unexecuted_blocks=1 00:09:31.666 00:09:31.666 ' 00:09:31.666 13:58:09 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.666 13:58:09 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.666 13:58:09 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.666 13:58:09 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.666 13:58:09 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.666 13:58:09 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:31.666 13:58:09 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
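Both the nvme and nvme_scc suites gate their lcov coverage flags on the same scripts/common.sh version helper (the `lt 1.15 2` trace above, run once per suite). A compact re-sketch of that dotted-version comparison, as a sketch of the idiom rather than the script's verbatim body:

# Sketch of the cmp_versions idiom traced above: split on '.'/'-' and
# compare numerically field by field, treating missing fields as 0.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

# lcov 1.15 < 2, so the old-lcov branch/function coverage flags get set:
lt "$(lcov --version | awk '{print $NF}')" 2 \
    && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
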
00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:31.666 13:58:09 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:31.666 13:58:09 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.666 13:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:31.666 13:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:31.666 13:58:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:31.666 13:58:09 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.924 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.924 Waiting for block devices as requested 00:09:31.924 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.182 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.182 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.182 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.457 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.457 13:58:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:37.457 13:58:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:37.457 13:58:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:37.457 13:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.457 13:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.457 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
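(The mdts=7 captured above is worth decoding: per the NVMe base specification, MDTS caps a single data transfer at 2^MDTS units of the controller's minimum memory page size, CAP.MPSMIN. MPSMIN is not captured in this trace, so assuming the common 4 KiB minimum page, this QEMU controller accepts at most 512 KiB per command:

    # Hedged arithmetic; the 4 KiB MPSMIN is assumed, not read from this log.
    mdts=7
    mpsmin=4096
    echo $(( (1 << mdts) * mpsmin ))   # 524288 bytes = 512 KiB max transfer
)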
00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.458 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:37.459 13:58:15 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:37.459 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.460 13:58:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.460 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
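(The id-ns values already captured for nvme0n1 — nsze=0x140000 and flbas=0x4 — pin down the namespace size once combined with the lbaf4 entry recorded a little further below (lbads:12, i.e. 4096-byte blocks, marked "(in use)"). A quick check of that arithmetic:

    # nsze counts logical blocks; flbas=0x4 selects LBA format 4 (lbads:12).
    nsze=0x140000                      # 1310720 blocks
    lbads=12                           # 2^12 = 4096-byte logical blocks
    echo $(( nsze * (1 << lbads) ))    # 5368709120 bytes = a 5 GiB namespace
)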
00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.461 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.462 13:58:15 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:37.462 13:58:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:37.462 13:58:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:37.462 13:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.462 13:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:37.462 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.463 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:37.463 13:58:15 nvme_scc -- 
nvme/functions.sh@21-23 -- # nvme_get nvme1: id-ctrl registers parsed into the nvme1 associative array (each output line split with IFS=: read -r reg val, skipped when the value is empty, then stored via eval 'nvme1[reg]="val"'):
    mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0
    cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0
    nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0
    avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0
    unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
    sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
    anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44
    maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
    nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
    subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
    ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
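Every register above lands in the array through the same four-step xtrace pattern: IFS=: splits the nvme-cli output line, read -r reg val captures key and value, a [[ -n ... ]] guard skips empty values, and eval stores the pair. A minimal standalone sketch of that loop, assuming nvme-cli is on PATH and reusing /dev/nvme1 from this run (the real nvme_get in nvme/functions.sh takes the array name and subcommand as arguments):

#!/usr/bin/env bash
# Sketch of the nvme_get parsing pattern seen in the trace above:
# split each "reg : val" line from nvme-cli on the first colon and
# keep non-empty values in a bash associative array.
declare -A nvme1=()
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}               # drop the padding around the key
  val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces from the value
  [[ -n $reg && -n $val ]] || continue   # skip blank and decorative lines
  nvme1[$reg]=$val                       # e.g. nvme1[mdts]=7, nvme1[ver]=0x10400
done < <(nvme id-ctrl /dev/nvme1)
printf 'mdts=%s ver=%s oncs=%s\n' "${nvme1[mdts]}" "${nvme1[ver]}" "${nvme1[oncs]}"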
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:37.465 13:58:15 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme1n1: id-ns registers parsed into the nvme1n1 array (same IFS=:/read/eval pattern):
    nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7
    mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0
    nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0
    npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
    nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
    lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0'
    lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
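The flbas/lbaf values above are enough to derive the namespace geometry: the low nibble of flbas selects the active LBA format (here lbaf7, also marked "in use"), and lbads is the log2 of the logical block size. A small sketch with the numbers hard-coded from this run; on a live system they would come from the nvme1n1 array populated above:

#!/usr/bin/env bash
# Derive block size and capacity from the id-ns values captured above.
nsze=0x17a17a                               # namespace size in logical blocks
flbas=0x7                                   # bits 3:0 pick the active LBA format
lbaf7='ms:64 lbads:12 rp:0 (in use)'
fmt=$(( flbas & 0xf ))                      # -> 7
lbads=$(grep -o 'lbads:[0-9]*' <<< "$lbaf7" | cut -d: -f2)
block=$(( 1 << lbads ))                     # 2^12 = 4096-byte blocks
printf 'lbaf%d: %d-byte blocks, %d blocks, ~%d MiB\n' \
  "$fmt" "$block" "$(( nsze ))" "$(( nsze * block / 1024 / 1024 ))"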
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:37.467 13:58:15 nvme_scc -- scripts/common.sh@18-27 -- # pci_can_use 0000:00:12.0: no allow/block list set ([[ =~ 0000:00:12.0 ]], [[ -z '' ]]), return 0
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:37.467 13:58:15 nvme_scc -- nvme/functions.sh@21-23 -- # nvme_get nvme2: id-ctrl registers parsed into the nvme2 array (same pattern; the parse continues past this point in the trace):
    vid=0x1b36 ssvid=0x1af4 sn='12342   ' mn='QEMU NVMe Ctrl  ' fr='8.0.0  '
    rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
    oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
    fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0
    vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0
    avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0
    unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
    sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0
00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:37.469 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:37.470 
13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
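Every assignment above is produced by the nvme_get helper whose line numbers (functions.sh@16-23) appear in the trace. A minimal re-creation of that loop, pieced together from the xtrace output rather than copied from SPDK's source; the NVME_CMD default and the key/value whitespace trimming are assumptions:

    #!/usr/bin/env bash
    # Sketch of the nvme_get loop traced above (nvme/functions.sh@16-23).
    # Reconstructed from the xtrace output, not the verbatim SPDK helper.
    NVME_CMD=${NVME_CMD:-/usr/local/src/nvme-cli/nvme}

    nvme_get() {
        local ref=$1 reg val              # @17: array name, e.g. nvme2
        shift                             # @18: rest = nvme-cli arguments
        local -gA "${ref}=()"             # @20: global associative array
        while IFS=: read -r reg val; do   # @21: split "reg : val" lines
            reg=${reg//[[:space:]]/}      # trim padding around the key
            val=${val# }                  # drop the blank after ':'
            [[ -n $val ]] || continue     # @22: skip value-less lines
            eval "${ref}[${reg}]=\"${val}\""   # @23: nvme2[oacs]=0x12a
        done < <("$NVME_CMD" "$@")        # @16: e.g. id-ctrl /dev/nvme2
    }

    # usage, mirroring the trace: nvme_get nvme2 id-ctrl /dev/nvme2
    # then: echo "${nvme2[subnqn]}"   -> nqn.2019-08.org.qemu:12342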
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme2_ns
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme2n1
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@17 -- local ref=nvme2n1 reg val
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@18 -- shift
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@20 -- local -gA 'nvme2n1=()'
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:37.470 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=nvme2n1
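One worked read of those namespace fields: the low four bits of flbas select the active entry in the lbaf table, and that entry's lbads is log2 of the data block size. A small bash check against the nvme2n1 values just dumped; the bit layout is standard NVMe, nothing here is SPDK-specific code:

    # decode the in-use LBA format from the array nvme_get filled above
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))           # 0x4 & 0xf = 4
    lbaf=${nvme2n1[lbaf$fmt]}                    # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
    bs=$(( 1 << lbads ))                         # 2^12 = 4096-byte blocks
    echo "bs=${bs}B nsze=$(( ${nvme2n1[nsze]} )) blocks"
    echo "size=$(( ${nvme2n1[nsze]} * bs / 1024 / 1024 )) MiB"   # 4096 MiB

So nsze=0x100000 blocks at 4096 bytes each puts each of these QEMU namespaces at 4 GiB, which matches the "(in use)" marker on lbaf4.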
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@54 -- for ns in "$ctrl/${ctrl##*/}n"*
00:09:37.471 13:58:15 nvme_scc -- nvme/functions.sh@55 -- [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@56 -- ns_dev=nvme2n2
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@57 -- nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@17 -- local ref=nvme2n2 reg val
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@18 -- shift
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@20 -- local -gA 'nvme2n2=()'
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@16 -- /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:37.472 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@58 -- _ctrl_ns[${ns##*n}]=nvme2n2
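The per-namespace dumps are driven by the loop at functions.sh@53-58 visible in the trace. Re-sketched below; scan_namespaces is a hypothetical wrapper name (the real loop runs inside SPDK's controller enumeration), and the $ctrl sysfs path plus the pre-declared nvme2_ns array are assumptions:

    # Hypothetical wrapper around the loop traced at functions.sh@53-58.
    scan_namespaces() {
        local ctrl=$1                               # e.g. /sys/class/nvme/nvme2
        local -n _ctrl_ns="${ctrl##*/}_ns"          # @53: nameref -> nvme2_ns
        local ns ns_dev
        for ns in "$ctrl/${ctrl##*/}n"*; do         # @54: nvme2n1 nvme2n2 nvme2n3
            [[ -e $ns ]] || continue                # @55: glob may match nothing
            ns_dev=${ns##*/}                        # @56: e.g. nvme2n2
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # @57: fill nvme2n2[...]
            _ctrl_ns[${ns##*n}]=$ns_dev             # @58: nvme2_ns[2]=nvme2n2
        done
    }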
13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:37.473 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:37.474 13:58:15 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:37.474 
13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:37.474 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:37.736 13:58:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:37.736 13:58:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.736 13:58:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.736 13:58:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:37.736 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:37.737 13:58:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.737 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 
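Two of the id-ctrl fields just cached, wctemp=343 and cctemp=373, are temperature thresholds that NVMe reports in kelvins; subtracting 273 recovers the roughly 70 °C warning and 100 °C critical limits this QEMU controller advertises. A quick check in the same shell dialect:

    for k in 343 373; do echo "$k K = $(( k - 273 )) C"; done
    # 343 K = 70 C
    # 373 K = 100 C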
13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:37.738 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.739 13:58:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:37.739 13:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:37.739 13:58:15 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:37.740 
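The loop that begins here answers nvme_scc.sh's get_ctrl_with_feature query: for each cached controller it re-reads the stored oncs value through a bash nameref (local -n _ctrl=nvme1) and tests bit 8, the Copy bit of the NVMe ONCS mask, so 0x15d (binary 1 0101 1101) qualifies. A condensed sketch of that test, written as a hypothetical standalone function rather than the SPDK helper itself (the real ctrl_has_scc takes a controller name and looks the value up):

    # Does a controller whose Identify data reports ONCS=$1 support
    # the Simple Copy command (ONCS bit 8)?
    ctrl_has_scc() {
        local oncs=$1
        (( oncs & 1 << 8 ))
    }
    ctrl_has_scc 0x15d && echo "SCC supported"   # prints: SCC supported

All four controllers report oncs=0x15d, so each one echoes its name; get_ctrls_with_feature returns the list and nvme_scc.sh simply takes the first entry, nvme1, whose bdf is 0000:00:10.0.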
13:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:37.740 13:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:37.740 13:58:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:37.740 13:58:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.566 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.566 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.566 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.566 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:38.826 13:58:16 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:38.826 13:58:16 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.826 13:58:16 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.826 13:58:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:38.826 ************************************ 00:09:38.826 START TEST nvme_simple_copy 00:09:38.826 ************************************ 00:09:38.826 13:58:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:39.085 Initializing NVMe Controllers 00:09:39.085 Attaching to 0000:00:10.0 00:09:39.085 Controller supports SCC. Attached to 0000:00:10.0 00:09:39.085 Namespace ID: 1 size: 6GB 00:09:39.085 Initialization complete. 00:09:39.086 00:09:39.086 Controller QEMU NVMe Ctrl (12340 ) 00:09:39.086 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:39.086 Namespace Block Size:4096 00:09:39.086 Writing LBAs 0 to 63 with Random Data 00:09:39.086 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:39.086 LBAs matching Written Data: 64 00:09:39.086 00:09:39.086 real 0m0.247s 00:09:39.086 user 0m0.085s 00:09:39.086 sys 0m0.060s 00:09:39.086 ************************************ 00:09:39.086 END TEST nvme_simple_copy 00:09:39.086 ************************************ 00:09:39.086 13:58:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.086 13:58:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:39.086 00:09:39.086 real 0m7.617s 00:09:39.086 user 0m1.001s 00:09:39.086 sys 0m1.390s 00:09:39.086 13:58:17 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.086 ************************************ 00:09:39.086 END TEST nvme_scc 00:09:39.086 ************************************ 00:09:39.086 13:58:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:39.086 13:58:17 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:39.086 13:58:17 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:39.086 13:58:17 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:39.086 13:58:17 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:39.086 13:58:17 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:39.086 13:58:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.086 13:58:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.086 13:58:17 -- common/autotest_common.sh@10 -- # set +x 00:09:39.086 ************************************ 00:09:39.086 START TEST nvme_fdp 00:09:39.086 ************************************ 00:09:39.086 13:58:17 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:09:39.086 * Looking for test storage... 
00:09:39.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.086 13:58:17 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:39.086 13:58:17 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:39.086 13:58:17 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:39.086 13:58:17 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.086 13:58:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:39.345 13:58:17 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.345 13:58:17 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:39.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.345 --rc genhtml_branch_coverage=1 00:09:39.345 --rc genhtml_function_coverage=1 00:09:39.345 --rc genhtml_legend=1 00:09:39.345 --rc geninfo_all_blocks=1 00:09:39.345 --rc geninfo_unexecuted_blocks=1 00:09:39.345 00:09:39.345 ' 00:09:39.345 13:58:17 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:39.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.345 --rc genhtml_branch_coverage=1 00:09:39.345 --rc genhtml_function_coverage=1 00:09:39.345 --rc genhtml_legend=1 00:09:39.345 --rc geninfo_all_blocks=1 00:09:39.345 --rc geninfo_unexecuted_blocks=1 00:09:39.345 00:09:39.345 ' 00:09:39.345 13:58:17 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:39.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.345 --rc genhtml_branch_coverage=1 00:09:39.345 --rc genhtml_function_coverage=1 00:09:39.345 --rc genhtml_legend=1 00:09:39.345 --rc geninfo_all_blocks=1 00:09:39.345 --rc geninfo_unexecuted_blocks=1 00:09:39.345 00:09:39.345 ' 00:09:39.345 13:58:17 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:39.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.345 --rc genhtml_branch_coverage=1 00:09:39.345 --rc genhtml_function_coverage=1 00:09:39.345 --rc genhtml_legend=1 00:09:39.345 --rc geninfo_all_blocks=1 00:09:39.345 --rc geninfo_unexecuted_blocks=1 00:09:39.345 00:09:39.345 ' 00:09:39.345 13:58:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.345 13:58:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.345 13:58:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.345 13:58:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.345 13:58:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.345 13:58:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:39.345 13:58:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
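The lt/cmp_versions walk above decides whether the installed lcov predates 2.0: both version strings are split on ., -, and : (the IFS=.-: reads), then compared numerically field by field, with the first differing field deciding. Because lt 1.15 2 succeeds here, the legacy --rc lcov_branch_coverage/lcov_function_coverage options are exported. A condensed sketch that folds cmp_versions into a single function, not the verbatim scripts/common.sh code:

    # Condensed version comparison: split on . - : and compare numerically,
    # treating missing fields as 0; exit 0 when $1 is strictly older than $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov: keep legacy --rc flags"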
00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:39.345 13:58:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:39.345 13:58:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.345 13:58:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:39.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.604 Waiting for block devices as requested 00:09:39.862 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.862 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.862 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.862 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.136 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:45.136 13:58:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:45.136 13:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:45.136 13:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:45.136 13:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.136 13:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
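From here on, scan_nvme_ctrls walks /sys/class/nvme/nvme*, and for each controller nvme_get turns the "field : value" lines of nvme id-ctrl into a global associative array; the long run of eval 'nvme0[...]="..."' pairs that fills the rest of this trace is that loop executing. A minimal sketch of the parsing pattern, assuming nvme-cli on PATH; the real nvme_get also shifts past header lines and preserves multi-word values:

    # Parse `nvme id-ctrl` output such as "oncs : 0x15d" into an
    # associative array -- the pattern behind the eval lines below.
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                # drop padding around the name
        val=${val#"${val%%[![:space:]]*}"}      # left-trim the value
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${nvme0[vid]} oncs=${nvme0[oncs]}"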
00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:45.136 13:58:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:45.136 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:45.137 13:58:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
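Once populated, these arrays are read back through bash namerefs rather than eval, as get_nvme_ctrl_feature did earlier in the nvme_scc trace (local -n _ctrl=nvme0; [[ -n 0x15d ]]; echo 0x15d). A short sketch of that read-back; the function name here is illustrative:

    # Nameref read-back: resolve the string "nvme0" to the associative
    # array of the same name, then index it (bash 4.3+).
    get_feature() {
        local ctrl=$1 reg=$2
        local -n _ctrl=$ctrl      # _ctrl now aliases the array named by $ctrl
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    get_feature nvme0 oncs   # prints 0x15d in the run above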
00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:45.137 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:45.137 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:45.138 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:45.138 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.138 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:45.139 
13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:45.139 13:58:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.139 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:45.140 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:45.140 13:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:45.141 13:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:45.141 13:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:45.141 13:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.141 13:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 
13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.141 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 
13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:45.142 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:45.143 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:45.143 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:45.144 13:58:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:45.144 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:45.145 13:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:45.145 13:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:45.145 13:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.145 13:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:45.145 
13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.145 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:45.146 13:58:23 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.146 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp 
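[annotation] The wctemp=343 and cctemp=373 values captured just above are in Kelvin, as NVMe reports its temperature thresholds; subtracting 273 gives roughly 70 C (warning) and 100 C (critical). Checked against the array populated above:

  echo "$(( ${nvme2[wctemp]} - 273 )) C"   # 343 K -> 70 C warning threshold
  echo "$(( ${nvme2[cctemp]} - 273 )) C"   # 373 K -> 100 C critical threshold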
-- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.147 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:45.148 13:58:23 nvme_fdp -- 
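[annotation] The sqes=0x66 and cqes=0x44 values recorded here are packed power-of-two sizes: the low nibble is the required (minimum) entry size and the high nibble the maximum, so 0x66 decodes to 64-byte submission-queue entries and 0x44 to 16-byte completion-queue entries. A quick check:

  sqes=0x66
  printf 'SQE min=%d max=%d bytes\n' "$((1 << (sqes & 0xf)))" "$((1 << ((sqes >> 4) & 0xf)))"
  # -> SQE min=64 max=64 bytes; cqes=0x44 decodes to 16/16 the same way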
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.148 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:45.149 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
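[annotation] With nsze=ncap=nuse=0x100000 and flbas=0x4, nvme2n1 is a fully allocated namespace of 1,048,576 blocks using LBA format 4, which the lbaf4 entry further down shows as lbads:12, i.e. 4096-byte blocks. That works out to 4 GiB:

  blocks=$((0x100000))                        # nsze captured above
  lbads=12                                    # lbaf4, selected by flbas=0x4, is "ms:0 lbads:12"
  echo "$(( blocks * (1 << lbads) )) bytes"   # 4294967296 = 4 GiB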
00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.149 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.150 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- 
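[annotation] Each lbafN value stored above has the shape "ms:<metadata bytes> lbads:<log2 of data size> rp:<relative performance>", and "(in use)" marks the format flbas currently selects, here lbaf4 for nvme2n1. Assuming the arrays populated by this trace are still in scope, the in-use block size can be pulled back out of the stored string:

  [[ ${nvme2n1[lbaf4]} =~ lbads:([0-9]+) ]] && echo "$((1 << BASH_REMATCH[1]))"   # 4096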
nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.413 13:58:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:45.413 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:45.414 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.414 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
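[Annotation] The trace above is functions.sh building a global bash associative array (nvme2n3[...]) out of `nvme id-ns` output: each "reg : val" line is split on the first ':' and eval'ed into the array, exactly as the `IFS=:` / `read -r reg val` / `eval 'nvme2n3[key]="val"'` triplets show. A condensed sketch of that pattern, assuming nvme-cli's human-readable "field : value" output; this is an illustration, not the verbatim SPDK source:

  # Sketch of the parsing loop being traced (helper name is ours).
  nvme_get_sketch() {
      local ref=$1 subcmd=$2 dev=$3 reg val
      local -gA "$ref=()"                     # e.g. nvme2n3=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}            # field name, whitespace stripped
          [[ -n $reg && -n $val ]] || continue
          # values never contain double quotes in id-ns output, so a
          # simple eval assignment is safe here
          eval "${ref}[$reg]=\"${val# }\""    # nvme2n3[nsze]="0x100000"
      done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
  }

  nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3
  echo "${nvme2n3[nsze]}"                     # -> 0x100000

Note that `read -r reg val` assigns everything after the first ':' to val, which is why multi-colon values such as the lbaf descriptors ("ms:0 lbads:9 rp:0") survive intact.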
00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.415 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:45.416 
13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:45.416 13:58:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:45.416 13:58:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:45.416 13:58:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:45.416 13:58:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:45.416 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:45.416 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 
13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.417 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 
13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:45.418 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
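[Annotation] The hex values eval'ed into nvme3[...] above are raw NVMe Identify Controller fields and only become meaningful once decoded per the NVMe base specification. For example, sqes=0x66 and cqes=0x44 pack the required (low nibble) and maximum (high nibble) queue-entry sizes as powers of two. A small sketch of that decode; the field layout is from the NVMe spec, the helper name is ours:

  # Decode SQES/CQES nibbles into byte sizes (2^n per the NVMe spec).
  decode_qes() {
      local qes=$1
      printf 'min=%d bytes max=%d bytes\n' \
          $((2 ** (qes & 0xf))) $((2 ** ((qes >> 4) & 0xf)))
  }
  decode_qes 0x66   # SQ entries: min=64 bytes max=64 bytes
  decode_qes 0x44   # CQ entries: min=16 bytes max=16 bytes

The ctratt=0x88010 captured for this controller is the value the FDP probe below keys off.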
00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:45.419 13:58:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:45.419 13:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
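ctrl_has_fdp reduces to a single bit test: CTRATT bit 19 advertises Flexible Data Placement support. The arithmetic below reproduces why only nvme3 passes; the hard-coded values are taken from the trace above:
ctratt=0x88010                     # nvme3's CTRATT as reported above
if (( ctratt & (1 << 19) )); then  # 1 << 19 == 0x80000; 0x88010 & 0x80000 == 0x80000
    echo "FDP-capable"
fi
ctratt=0x8000                      # nvme0/nvme1/nvme2
(( ctratt & (1 << 19) )) || echo "no FDP: 0x8000 & 0x80000 == 0"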
00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:45.420 13:58:23 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:45.420 13:58:23 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:45.420 13:58:23 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:45.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.245 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.245 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.245 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.245 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.245 13:58:24 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:46.503 13:58:24 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:46.503 13:58:24 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.503 13:58:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:46.503 ************************************ 00:09:46.503 START TEST nvme_flexible_data_placement 00:09:46.503 ************************************ 00:09:46.503 13:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:46.503 Initializing NVMe Controllers 00:09:46.503 Attaching to 0000:00:13.0 00:09:46.503 Controller supports FDP Attached to 0000:00:13.0 00:09:46.504 Namespace ID: 1 Endurance Group ID: 1 00:09:46.504 Initialization complete. 00:09:46.504 00:09:46.504 ================================== 00:09:46.504 == FDP tests for Namespace: #01 == 00:09:46.504 ================================== 00:09:46.504 00:09:46.504 Get Feature: FDP: 00:09:46.504 ================= 00:09:46.504 Enabled: Yes 00:09:46.504 FDP configuration Index: 0 00:09:46.504 00:09:46.504 FDP configurations log page 00:09:46.504 =========================== 00:09:46.504 Number of FDP configurations: 1 00:09:46.504 Version: 0 00:09:46.504 Size: 112 00:09:46.504 FDP Configuration Descriptor: 0 00:09:46.504 Descriptor Size: 96 00:09:46.504 Reclaim Group Identifier format: 2 00:09:46.504 FDP Volatile Write Cache: Not Present 00:09:46.504 FDP Configuration: Valid 00:09:46.504 Vendor Specific Size: 0 00:09:46.504 Number of Reclaim Groups: 2 00:09:46.504 Number of Reclaim Unit Handles: 8 00:09:46.504 Max Placement Identifiers: 128 00:09:46.504 Number of Namespaces Supported: 256 00:09:46.504 Reclaim Unit Nominal Size: 6000000 bytes 00:09:46.504 Estimated Reclaim Unit Time Limit: Not Reported 00:09:46.504 RUH Desc #000: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #001: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #002: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #003: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #004: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #005: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #006: RUH Type: Initially Isolated 00:09:46.504 RUH Desc #007: RUH Type: Initially Isolated 00:09:46.504 00:09:46.504 FDP reclaim unit handle usage log page 00:09:46.504 ====================================== 00:09:46.504 Number of Reclaim Unit Handles: 8 00:09:46.504 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:46.504 RUH Usage Desc #001: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #002: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #003: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #004: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #005: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #006: RUH Attributes: Unused 00:09:46.504 RUH Usage Desc #007: RUH Attributes: Unused 00:09:46.504 00:09:46.504 FDP statistics log page 00:09:46.504 ======================= 00:09:46.504 Host bytes with metadata written: 2019975168 00:09:46.504 Media bytes with metadata written: 2021076992 00:09:46.504 Media bytes erased: 0 00:09:46.504 00:09:46.504 FDP Reclaim unit handle status 00:09:46.504 ============================== 00:09:46.504 Number of RUHS descriptors: 2 00:09:46.504 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000599a 00:09:46.504 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:46.504 00:09:46.504 FDP write on placement id: 0 success 00:09:46.504 00:09:46.504 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:46.504 00:09:46.504 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:46.504 00:09:46.504 Get Feature: FDP Events for Placement handle: #0 00:09:46.504 ======================== 00:09:46.504 Number of FDP Events: 6 00:09:46.504 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:46.504 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:46.504 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:46.504 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:46.504 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:46.504 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:46.504 00:09:46.504 FDP events log page 00:09:46.504 =================== 00:09:46.504 Number of FDP events: 1 00:09:46.504 FDP Event #0: 00:09:46.504 Event Type: RU Not Written to Capacity 00:09:46.504 Placement Identifier: Valid 00:09:46.504 NSID: Valid 00:09:46.504 Location: Valid 00:09:46.504 Placement Identifier: 0 00:09:46.504 Event Timestamp: 2 00:09:46.504 Namespace Identifier: 1 00:09:46.504 Reclaim Group Identifier: 0 00:09:46.504 Reclaim Unit Handle Identifier: 0 00:09:46.504 00:09:46.504 FDP test passed 00:09:46.504 00:09:46.504 real 0m0.211s 00:09:46.504 user 0m0.063s 00:09:46.504 sys 0m0.047s 00:09:46.504 13:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.504 ************************************ 00:09:46.504 END TEST nvme_flexible_data_placement 00:09:46.504 ************************************ 00:09:46.504 13:58:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:46.504 ************************************ 00:09:46.504 END TEST nvme_fdp 00:09:46.504 ************************************ 00:09:46.504 00:09:46.504 real 0m7.547s 00:09:46.504 user 0m1.012s 00:09:46.504 sys 0m1.335s 00:09:46.504 13:58:24 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.504 13:58:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:46.763 13:58:24 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:46.763 13:58:24 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:46.763 13:58:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:46.763 13:58:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.763 13:58:24 -- common/autotest_common.sh@10 -- # set +x 00:09:46.763 ************************************ 00:09:46.763 START TEST nvme_rpc 00:09:46.763 ************************************ 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:46.763 * Looking for test storage... 
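The RUHS descriptors and statistics in the report above decode to plain numbers. Reading them this way is an interpretation of the output, not something the test itself asserts: handle #0001 still shows its full RUAMW of 0x6000 blocks, the handle the test wrote through dropped to 0x599a, and the host/media byte counters give the run's write amplification:
printf 'RU media writes consumed: %d blocks\n' $(( 0x6000 - 0x599a ))          # 1638
awk 'BEGIN { printf "write amplification: %.5f\n", 2021076992 / 2019975168 }'  # ~1.00055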
00:09:46.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.763 13:58:24 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.763 --rc genhtml_branch_coverage=1 00:09:46.763 --rc genhtml_function_coverage=1 00:09:46.763 --rc genhtml_legend=1 00:09:46.763 --rc geninfo_all_blocks=1 00:09:46.763 --rc geninfo_unexecuted_blocks=1 00:09:46.763 00:09:46.763 ' 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.763 --rc genhtml_branch_coverage=1 00:09:46.763 --rc genhtml_function_coverage=1 00:09:46.763 --rc genhtml_legend=1 00:09:46.763 --rc geninfo_all_blocks=1 00:09:46.763 --rc geninfo_unexecuted_blocks=1 00:09:46.763 00:09:46.763 ' 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.763 --rc genhtml_branch_coverage=1 00:09:46.763 --rc genhtml_function_coverage=1 00:09:46.763 --rc genhtml_legend=1 00:09:46.763 --rc geninfo_all_blocks=1 00:09:46.763 --rc geninfo_unexecuted_blocks=1 00:09:46.763 00:09:46.763 ' 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.763 --rc genhtml_branch_coverage=1 00:09:46.763 --rc genhtml_function_coverage=1 00:09:46.763 --rc genhtml_legend=1 00:09:46.763 --rc geninfo_all_blocks=1 00:09:46.763 --rc geninfo_unexecuted_blocks=1 00:09:46.763 00:09:46.763 ' 00:09:46.763 13:58:24 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.763 13:58:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:46.763 13:58:24 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:46.763 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:46.763 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77888 00:09:46.763 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:46.763 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77888 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 77888 ']' 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.763 13:58:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.763 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:47.021 [2024-11-17 13:58:25.113906] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
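get_first_nvme_bdf above boils down to: list the NVMe transport addresses from gen_nvme.sh's JSON config, fail if none are found, and print the first. A condensed sketch using the same commands the trace shows:
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1    # the helper errors out when no NVMe is present
echo "${bdfs[0]}"                  # -> 0000:00:10.0 on this VM (of 4 controllers)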
00:09:47.022 [2024-11-17 13:58:25.114154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77888 ] 00:09:47.022 [2024-11-17 13:58:25.253594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.022 [2024-11-17 13:58:25.285815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.022 [2024-11-17 13:58:25.285854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.955 13:58:25 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.955 13:58:25 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:47.955 13:58:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:47.955 Nvme0n1 00:09:47.955 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:47.955 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:48.216 request: 00:09:48.216 { 00:09:48.216 "bdev_name": "Nvme0n1", 00:09:48.216 "filename": "non_existing_file", 00:09:48.216 "method": "bdev_nvme_apply_firmware", 00:09:48.216 "req_id": 1 00:09:48.216 } 00:09:48.216 Got JSON-RPC error response 00:09:48.216 response: 00:09:48.216 { 00:09:48.216 "code": -32603, 00:09:48.216 "message": "open file failed." 00:09:48.216 } 00:09:48.216 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:48.216 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:48.216 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:48.474 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:48.474 13:58:26 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77888 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 77888 ']' 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 77888 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77888 00:09:48.474 killing process with pid 77888 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77888' 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@969 -- # kill 77888 00:09:48.474 13:58:26 nvme_rpc -- common/autotest_common.sh@974 -- # wait 77888 00:09:48.732 ************************************ 00:09:48.732 END TEST nvme_rpc 00:09:48.732 ************************************ 00:09:48.732 00:09:48.732 real 0m2.051s 00:09:48.732 user 0m4.015s 00:09:48.732 sys 0m0.453s 00:09:48.732 13:58:26 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.732 13:58:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.732 13:58:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.732 13:58:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:09:48.732 13:58:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.732 13:58:26 -- common/autotest_common.sh@10 -- # set +x 00:09:48.732 ************************************ 00:09:48.732 START TEST nvme_rpc_timeouts 00:09:48.732 ************************************ 00:09:48.732 13:58:26 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.732 * Looking for test storage... 00:09:48.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.732 13:58:26 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:48.732 13:58:26 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:09:48.732 13:58:26 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.044 13:58:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.044 --rc genhtml_branch_coverage=1 00:09:49.044 --rc genhtml_function_coverage=1 00:09:49.044 --rc genhtml_legend=1 00:09:49.044 --rc geninfo_all_blocks=1 00:09:49.044 --rc geninfo_unexecuted_blocks=1 00:09:49.044 00:09:49.044 ' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.044 --rc genhtml_branch_coverage=1 00:09:49.044 --rc genhtml_function_coverage=1 00:09:49.044 --rc genhtml_legend=1 00:09:49.044 --rc geninfo_all_blocks=1 00:09:49.044 --rc geninfo_unexecuted_blocks=1 00:09:49.044 00:09:49.044 ' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.044 --rc genhtml_branch_coverage=1 00:09:49.044 --rc genhtml_function_coverage=1 00:09:49.044 --rc genhtml_legend=1 00:09:49.044 --rc geninfo_all_blocks=1 00:09:49.044 --rc geninfo_unexecuted_blocks=1 00:09:49.044 00:09:49.044 ' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:49.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.044 --rc genhtml_branch_coverage=1 00:09:49.044 --rc genhtml_function_coverage=1 00:09:49.044 --rc genhtml_legend=1 00:09:49.044 --rc geninfo_all_blocks=1 00:09:49.044 --rc geninfo_unexecuted_blocks=1 00:09:49.044 00:09:49.044 ' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77938 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77938 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77970 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77970 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 77970 ']' 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.044 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.044 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:49.044 [2024-11-17 13:58:27.138914] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:49.044 [2024-11-17 13:58:27.139028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77970 ] 00:09:49.044 [2024-11-17 13:58:27.283436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.044 [2024-11-17 13:58:27.315040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.044 [2024-11-17 13:58:27.315152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.999 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.999 Checking default timeout settings: 00:09:49.999 13:58:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:09:49.999 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:49.999 13:58:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.999 Making settings changes with rpc: 00:09:49.999 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:49.999 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:50.257 Check default vs. modified settings: 00:09:50.257 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:50.257 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:50.515 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:50.515 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.515 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77938 00:09:50.515 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.515 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:50.774 Setting action_on_timeout is changed as expected. 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.774 Setting timeout_us is changed as expected. 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
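The pass/fail logic above is a field-by-field diff of the two configs saved with save_config before and after bdev_nvme_set_options. Condensed into one loop, with the temp-file names from this run:
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_77938 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_77938 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before == "$after" ]] && { echo "ERROR: $setting was not modified"; exit 1; }
    echo "Setting $setting is changed as expected."   # none->abort, 0->12000000, 0->24000000
done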
00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.774 Setting timeout_admin_us is changed as expected. 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77938 /tmp/settings_modified_77938 00:09:50.774 13:58:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77970 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 77970 ']' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 77970 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77970 00:09:50.774 killing process with pid 77970 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77970' 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 77970 00:09:50.774 13:58:28 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 77970 00:09:51.032 RPC TIMEOUT SETTING TEST PASSED. 00:09:51.032 13:58:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:09:51.032 00:09:51.032 real 0m2.205s 00:09:51.032 user 0m4.431s 00:09:51.032 sys 0m0.449s 00:09:51.032 ************************************ 00:09:51.032 END TEST nvme_rpc_timeouts 00:09:51.032 ************************************ 00:09:51.032 13:58:29 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.032 13:58:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 13:58:29 -- spdk/autotest.sh@239 -- # uname -s 00:09:51.032 13:58:29 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:51.032 13:58:29 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.032 13:58:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:51.032 13:58:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.032 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:09:51.032 ************************************ 00:09:51.032 START TEST sw_hotplug 00:09:51.032 ************************************ 00:09:51.032 13:58:29 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.032 * Looking for test storage... 00:09:51.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.032 13:58:29 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.033 13:58:29 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.033 --rc genhtml_branch_coverage=1 00:09:51.033 --rc genhtml_function_coverage=1 00:09:51.033 --rc genhtml_legend=1 00:09:51.033 --rc geninfo_all_blocks=1 00:09:51.033 --rc geninfo_unexecuted_blocks=1 00:09:51.033 00:09:51.033 ' 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.033 --rc genhtml_branch_coverage=1 00:09:51.033 --rc genhtml_function_coverage=1 00:09:51.033 --rc genhtml_legend=1 00:09:51.033 --rc geninfo_all_blocks=1 00:09:51.033 --rc geninfo_unexecuted_blocks=1 00:09:51.033 00:09:51.033 ' 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.033 --rc genhtml_branch_coverage=1 00:09:51.033 --rc genhtml_function_coverage=1 00:09:51.033 --rc genhtml_legend=1 00:09:51.033 --rc geninfo_all_blocks=1 00:09:51.033 --rc geninfo_unexecuted_blocks=1 00:09:51.033 00:09:51.033 ' 00:09:51.033 13:58:29 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.033 --rc genhtml_branch_coverage=1 00:09:51.033 --rc genhtml_function_coverage=1 00:09:51.033 --rc genhtml_legend=1 00:09:51.033 --rc geninfo_all_blocks=1 00:09:51.033 --rc geninfo_unexecuted_blocks=1 00:09:51.033 00:09:51.033 ' 00:09:51.033 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:51.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.550 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.550 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.550 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.550 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:51.550 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:51.550 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:51.550 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
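The nvme_in_userspace scan that follows assembles the PCI class string for NVMe (class 01, subclass 08, prog-if 02) and filters lspci for it; a condensed sketch of the same pipeline:
class=$(printf '%02x' 1); subclass=$(printf '%02x' 8); progif=$(printf '%02x' 2)
lspci -mm -n -D | grep -i -- "-p${progif}" \
    | awk -v cc="\"${class}${subclass}\"" -F ' ' '{ if (cc ~ $2) print $1 }' \
    | tr -d '"'    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0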
00:09:51.550 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.550 13:58:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.551 13:58:29 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:51.551 13:58:29 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.551 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:51.551 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:51.551 13:58:29 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.068 Waiting for block devices as requested 00:09:52.068 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.068 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.068 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.326 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:57.590 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:57.590 13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:57.590 13:58:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.590 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:57.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.849 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:58.107 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:58.365 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.365 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78811 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:09:58.365 13:58:36 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:58.365 13:58:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:58.624 Initializing NVMe Controllers 00:09:58.624 Attaching to 0000:00:10.0 00:09:58.624 Attaching to 0000:00:11.0 00:09:58.624 Attached to 0000:00:10.0 00:09:58.624 Attached to 0000:00:11.0 00:09:58.624 Initialization complete. Starting I/O... 
00:09:58.624 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:58.624 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:58.624 00:09:59.558 QEMU NVMe Ctrl (12340 ): 2872 I/Os completed (+2872) 00:09:59.558 QEMU NVMe Ctrl (12341 ): 2984 I/Os completed (+2984) 00:09:59.558 00:10:00.494 QEMU NVMe Ctrl (12340 ): 6449 I/Os completed (+3577) 00:10:00.494 QEMU NVMe Ctrl (12341 ): 6801 I/Os completed (+3817) 00:10:00.494 00:10:01.500 QEMU NVMe Ctrl (12340 ): 10209 I/Os completed (+3760) 00:10:01.500 QEMU NVMe Ctrl (12341 ): 10554 I/Os completed (+3753) 00:10:01.500 00:10:02.875 QEMU NVMe Ctrl (12340 ): 14013 I/Os completed (+3804) 00:10:02.875 QEMU NVMe Ctrl (12341 ): 14358 I/Os completed (+3804) 00:10:02.875 00:10:03.444 QEMU NVMe Ctrl (12340 ): 17892 I/Os completed (+3879) 00:10:03.444 QEMU NVMe Ctrl (12341 ): 18257 I/Os completed (+3899) 00:10:03.444 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.382 [2024-11-17 13:58:42.567025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:04.382 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:04.382 [2024-11-17 13:58:42.567853] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.567889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.567901] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.567915] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:04.382 [2024-11-17 13:58:42.570284] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.570321] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.570332] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.570343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:04.382 [2024-11-17 13:58:42.589083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:04.382 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:04.382 [2024-11-17 13:58:42.589832] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.589865] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.589878] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.589890] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:04.382 [2024-11-17 13:58:42.590700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.590725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.590739] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 [2024-11-17 13:58:42.590749] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:04.382 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:04.382 EAL: Scan for (pci) bus failed. 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:04.382 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.383 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.383 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:04.644 00:10:04.644 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:04.644 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.644 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:04.645 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:04.645 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:04.645 Attaching to 0000:00:10.0 00:10:04.645 Attached to 0000:00:10.0 00:10:04.645 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:04.645 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:04.645 13:58:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:04.645 Attaching to 0000:00:11.0 00:10:04.645 Attached to 0000:00:11.0 00:10:05.585 QEMU NVMe Ctrl (12340 ): 4358 I/Os completed (+4358) 00:10:05.585 QEMU NVMe Ctrl (12341 ): 3938 I/Os completed (+3938) 00:10:05.585 00:10:06.524 QEMU NVMe Ctrl (12340 ): 7819 I/Os completed (+3461) 00:10:06.524 QEMU NVMe Ctrl (12341 ): 7466 I/Os completed (+3528) 00:10:06.524 00:10:07.458 QEMU NVMe Ctrl (12340 ): 11527 I/Os completed (+3708) 00:10:07.458 QEMU NVMe Ctrl (12341 ): 11206 I/Os completed (+3740) 00:10:07.458 00:10:08.833 QEMU NVMe Ctrl (12340 ): 15303 I/Os completed (+3776) 00:10:08.833 QEMU NVMe Ctrl (12341 ): 14982 I/Os completed (+3776) 00:10:08.833 00:10:09.769 QEMU NVMe Ctrl (12340 ): 19567 I/Os completed (+4264) 00:10:09.769 QEMU NVMe Ctrl (12341 ): 19279 I/Os completed (+4297) 00:10:09.769 00:10:10.706 QEMU NVMe Ctrl (12340 ): 23505 I/Os completed (+3938) 00:10:10.706 QEMU NVMe Ctrl (12341 ): 23301 I/Os completed (+4022) 00:10:10.706 00:10:11.645 QEMU NVMe Ctrl (12340 ): 27269 I/Os completed (+3764) 00:10:11.645 QEMU NVMe Ctrl (12341 ): 27072 I/Os completed (+3771) 
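Each hotplug cycle above is a software "surprise" removal followed by a re-probe. The harness drives this through sysfs; the exact writes are elided in the trace, so the lines below are an assumed sketch of the standard kernel interfaces for this flow, not the script's literal code:
bdf=0000:00:10.0
echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise-remove: in-flight I/O is aborted
echo 1 > /sys/bus/pci/rescan                           # rediscover the function
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe               # rebind to the userspace driver
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override afterwards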
00:10:11.645 00:10:12.583 QEMU NVMe Ctrl (12340 ): 31218 I/Os completed (+3949) 00:10:12.583 QEMU NVMe Ctrl (12341 ): 31021 I/Os completed (+3949) 00:10:12.583 00:10:13.518 QEMU NVMe Ctrl (12340 ): 35457 I/Os completed (+4239) 00:10:13.518 QEMU NVMe Ctrl (12341 ): 35283 I/Os completed (+4262) 00:10:13.518 00:10:14.456 QEMU NVMe Ctrl (12340 ): 39122 I/Os completed (+3665) 00:10:14.456 QEMU NVMe Ctrl (12341 ): 39043 I/Os completed (+3760) 00:10:14.456 00:10:15.837 QEMU NVMe Ctrl (12340 ): 42834 I/Os completed (+3712) 00:10:15.837 QEMU NVMe Ctrl (12341 ): 42759 I/Os completed (+3716) 00:10:15.837 00:10:16.775 QEMU NVMe Ctrl (12340 ): 46689 I/Os completed (+3855) 00:10:16.775 QEMU NVMe Ctrl (12341 ): 46609 I/Os completed (+3850) 00:10:16.775 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.775 [2024-11-17 13:58:54.843539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:16.775 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:16.775 [2024-11-17 13:58:54.844457] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.844487] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.844500] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.844514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:16.775 [2024-11-17 13:58:54.845544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.845577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.845588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.845599] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:16.775 [2024-11-17 13:58:54.868394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:16.775 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:16.775 [2024-11-17 13:58:54.869136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.869162] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.869174] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.869186] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:16.775 [2024-11-17 13:58:54.870033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.870068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.870083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 [2024-11-17 13:58:54.870093] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:16.775 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:16.775 EAL: Scan for (pci) bus failed. 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.775 13:58:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:16.775 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:16.775 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:16.775 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:16.775 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:16.775 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:16.775 Attaching to 0000:00:10.0 00:10:16.775 Attached to 0000:00:10.0 00:10:17.034 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:17.034 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:17.034 13:58:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:17.034 Attaching to 0000:00:11.0 00:10:17.034 Attached to 0000:00:11.0 00:10:17.602 QEMU NVMe Ctrl (12340 ): 3021 I/Os completed (+3021) 00:10:17.602 QEMU NVMe Ctrl (12341 ): 2701 I/Os completed (+2701) 00:10:17.602 00:10:18.542 QEMU NVMe Ctrl (12340 ): 6733 I/Os completed (+3712) 00:10:18.542 QEMU NVMe Ctrl (12341 ): 6413 I/Os completed (+3712) 00:10:18.542 00:10:19.479 QEMU NVMe Ctrl (12340 ): 10449 I/Os completed (+3716) 00:10:19.479 QEMU NVMe Ctrl (12341 ): 10129 I/Os completed (+3716) 00:10:19.479 00:10:20.859 QEMU NVMe Ctrl (12340 ): 14149 I/Os completed (+3700) 00:10:20.859 QEMU NVMe Ctrl (12341 ): 13845 I/Os completed (+3716) 00:10:20.859 00:10:21.437 QEMU NVMe Ctrl (12340 ): 17832 I/Os completed (+3683) 00:10:21.437 QEMU NVMe Ctrl (12341 ): 17518 I/Os completed (+3673) 00:10:21.437 00:10:22.817 QEMU NVMe Ctrl (12340 ): 21514 I/Os completed (+3682) 00:10:22.817 QEMU NVMe Ctrl (12341 ): 21212 I/Os completed (+3694) 00:10:22.817 00:10:23.750 QEMU NVMe Ctrl (12340 ): 25194 I/Os completed (+3680) 00:10:23.750 QEMU NVMe Ctrl (12341 ): 24884 I/Os completed (+3672) 00:10:23.750 
00:10:24.685 QEMU NVMe Ctrl (12340 ): 29184 I/Os completed (+3990) 00:10:24.685 QEMU NVMe Ctrl (12341 ): 28868 I/Os completed (+3984) 00:10:24.685 00:10:25.620 QEMU NVMe Ctrl (12340 ): 33147 I/Os completed (+3963) 00:10:25.620 QEMU NVMe Ctrl (12341 ): 32780 I/Os completed (+3912) 00:10:25.620 00:10:26.557 QEMU NVMe Ctrl (12340 ): 36850 I/Os completed (+3703) 00:10:26.557 QEMU NVMe Ctrl (12341 ): 36494 I/Os completed (+3714) 00:10:26.557 00:10:27.491 QEMU NVMe Ctrl (12340 ): 40862 I/Os completed (+4012) 00:10:27.491 QEMU NVMe Ctrl (12341 ): 40428 I/Os completed (+3934) 00:10:27.491 00:10:28.525 QEMU NVMe Ctrl (12340 ): 44530 I/Os completed (+3668) 00:10:28.525 QEMU NVMe Ctrl (12341 ): 44102 I/Os completed (+3674) 00:10:28.525 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.093 [2024-11-17 13:59:07.115370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:29.093 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:29.093 [2024-11-17 13:59:07.116204] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.116261] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.116275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.116291] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:29.093 [2024-11-17 13:59:07.117795] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.117829] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.117840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.117851] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:29.093 [2024-11-17 13:59:07.139727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:29.093 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:29.093 [2024-11-17 13:59:07.140497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.140527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.140541] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.140553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:29.093 [2024-11-17 13:59:07.141389] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.141416] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.141427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 [2024-11-17 13:59:07.141437] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:29.093 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:29.093 EAL: Scan for (pci) bus failed. 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:29.093 Attaching to 0000:00:10.0 00:10:29.093 Attached to 0000:00:10.0 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:29.093 13:59:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:29.093 Attaching to 0000:00:11.0 00:10:29.093 Attached to 0000:00:11.0 00:10:29.093 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:29.093 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:29.350 [2024-11-17 13:59:07.393967] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:41.556 13:59:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:41.556 13:59:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:41.556 13:59:19 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.83 00:10:41.556 13:59:19 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.83 00:10:41.556 13:59:19 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:41.556 13:59:19 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:41.556 13:59:19 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:41.556 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 13:59:19 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78811 00:10:48.129 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78811) - No such process 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78811 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79368 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79368 00:10:48.129 13:59:25 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79368 ']' 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.129 13:59:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.129 [2024-11-17 13:59:25.475915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
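With the scripted phase finished (the kill -0 78811 probe above confirms the old helper process is gone), tgt_run_hotplug launches a persistent spdk_tgt and blocks in waitforlisten until the RPC socket answers. The trap line is taken from the trace; the polling body below is only an assumed shape of waitforlisten:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten (assumed shape): poll the UNIX-domain RPC socket until it responds
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_tgt_pid"      # give up early if the target died during startup
        sleep 0.1
    done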
00:10:48.129 [2024-11-17 13:59:25.476028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79368 ] 00:10:48.129 [2024-11-17 13:59:25.623852] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.129 [2024-11-17 13:59:25.655152] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:48.129 13:59:26 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:48.129 13:59:26 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.720 13:59:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.720 13:59:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.720 13:59:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:54.720 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:54.720 [2024-11-17 13:59:32.406430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
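The debug_remove_attach_helper/timing_cmd pair traced above is what eventually prints the "remove_attach_helper took ...s" summary. The locals (cmd_es=0, time=0, TIMEFORMAT=%2R) are visible in the trace; the redirection plumbing below is an assumed reconstruction of how bash's time report is captured while the helper keeps writing to the real stdout:

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R         # %2R: elapsed wall-clock seconds, two decimals
        exec {cmd_out}>&1                            # save the real stdout for the timed command
        time=$( { time "$@" >&"$cmd_out"; } 2>&1 )   # `time` reports on stderr; capture just that
        echo "$time"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)   # 3 events, 6 s wait, use_bdev=true
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2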
[0000:00:10.0] in failed state. 00:10:54.720 [2024-11-17 13:59:32.407483] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.407511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.407524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.407536] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.407545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.407552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.407563] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.407570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.407577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.407584] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.407591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.407598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.806437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:54.720 [2024-11-17 13:59:32.807465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.807492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.807502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.807514] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.807521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.720 [2024-11-17 13:59:32.807529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.720 [2024-11-17 13:59:32.807536] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.720 [2024-11-17 13:59:32.807544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-17 13:59:32.807550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 [2024-11-17 13:59:32.807559] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:54.721 [2024-11-17 13:59:32.807566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:54.721 [2024-11-17 13:59:32.807573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:54.721 13:59:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.721 13:59:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:54.721 13:59:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.721 13:59:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:54.979 13:59:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.196 [2024-11-17 13:59:45.206653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:07.196 [2024-11-17 13:59:45.207858] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.196 [2024-11-17 13:59:45.207892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.196 [2024-11-17 13:59:45.207904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.196 [2024-11-17 13:59:45.207916] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.196 [2024-11-17 13:59:45.207927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.196 [2024-11-17 13:59:45.207934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.196 [2024-11-17 13:59:45.207942] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.196 [2024-11-17 13:59:45.207948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.196 [2024-11-17 13:59:45.207956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.196 [2024-11-17 13:59:45.207962] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.196 [2024-11-17 13:59:45.207970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.196 [2024-11-17 13:59:45.207976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:07.196 13:59:45 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.196 13:59:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:07.196 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:07.457 [2024-11-17 13:59:45.606659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:07.457 [2024-11-17 13:59:45.607663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.457 [2024-11-17 13:59:45.607697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.457 [2024-11-17 13:59:45.607707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.457 [2024-11-17 13:59:45.607719] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.457 [2024-11-17 13:59:45.607727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.457 [2024-11-17 13:59:45.607735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.457 [2024-11-17 13:59:45.607743] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.457 [2024-11-17 13:59:45.607751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.457 [2024-11-17 13:59:45.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.457 [2024-11-17 13:59:45.607765] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:07.457 [2024-11-17 13:59:45.607771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:07.457 [2024-11-17 13:59:45.607779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:07.718 13:59:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.718 13:59:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.718 13:59:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:07.718 13:59:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:07.979 13:59:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.979 13:59:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.979 13:59:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:20.215 [2024-11-17 13:59:58.106854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
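bdev_bdfs, which the trace expands every half second here, is the probe the whole test hangs on: it asks the running target which PCI addresses still back NVMe bdevs. The pipeline is reassembled directly from the traced fragments (rpc_cmd, jq, sort -u); the surrounding wait loop is an assumed shape around the traced printf/sleep 0.5:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do              # sw_hotplug.sh@50: anything still attached?
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done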
00:11:20.215 [2024-11-17 13:59:58.107969] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.107989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.108002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.108014] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.108022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.108028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.108036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.108043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.108050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.108057] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.108064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.108071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.215 13:59:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:20.215 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:20.215 [2024-11-17 13:59:58.506853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:20.215 [2024-11-17 13:59:58.507855] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.507885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.507895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.507910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.507917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.507933] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.507941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.507948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.215 [2024-11-17 13:59:58.507955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:20.215 [2024-11-17 13:59:58.507962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:20.215 [2024-11-17 13:59:58.507969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:20.476 13:59:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.476 13:59:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.476 13:59:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.476 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:20.477 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:20.477 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.477 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.477 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:20.738 13:59:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.64 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.64 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.64 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.64 2 00:11:32.994 remove_attach_helper took 44.64s to complete (handling 2 nvme drive(s)) 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:32.994 14:00:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:32.994 14:00:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:32.994 14:00:10 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:39.585 14:00:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.585 14:00:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.585 14:00:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.585 14:00:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.585 14:00:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.585 14:00:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:39.585 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.585 [2024-11-17 14:00:17.081853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:39.585 [2024-11-17 14:00:17.082625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.585 [2024-11-17 14:00:17.082651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.585 [2024-11-17 14:00:17.082663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.585 [2024-11-17 14:00:17.082674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.585 [2024-11-17 14:00:17.082684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.585 [2024-11-17 14:00:17.082691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.585 [2024-11-17 14:00:17.082698] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.585 [2024-11-17 14:00:17.082705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.585 [2024-11-17 14:00:17.082716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.585 [2024-11-17 14:00:17.082723] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.585 [2024-11-17 14:00:17.082730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.586 [2024-11-17 14:00:17.082737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.586 [2024-11-17 14:00:17.481861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
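The rpc_cmd bdev_nvme_set_hotplug -d / -e pair traced just before this iteration is the target-side counterpart of the sysfs pokes: with the monitor disabled the driver would never notice a surprise removal, so the test re-enables it before detaching again. Standalone equivalents; the -r period flag does not appear in this trace and is included only as an assumed option of the same RPC:

    scripts/rpc.py bdev_nvme_set_hotplug -d             # stop polling for PCI add/remove events
    scripts/rpc.py bdev_nvme_set_hotplug -e             # resume: removed ctrlrs fail out, re-added ones re-probe
    scripts/rpc.py bdev_nvme_set_hotplug -e -r 100000   # assumed variant: poll every 100 ms (period in usec)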
00:11:39.586 [2024-11-17 14:00:17.482623] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.586 [2024-11-17 14:00:17.482654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.586 [2024-11-17 14:00:17.482664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.586 [2024-11-17 14:00:17.482677] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.586 [2024-11-17 14:00:17.482684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.586 [2024-11-17 14:00:17.482692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.586 [2024-11-17 14:00:17.482699] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.586 [2024-11-17 14:00:17.482707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.586 [2024-11-17 14:00:17.482714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.586 [2024-11-17 14:00:17.482721] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.586 [2024-11-17 14:00:17.482728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.586 [2024-11-17 14:00:17.482738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.586 14:00:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.586 14:00:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.586 14:00:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.586 14:00:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.820 14:00:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.820 14:00:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.820 [2024-11-17 14:00:29.982068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
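Each pass closes with the bracketed pattern match above: the sorted bdev_bdfs output has to equal the expected pair of BDFs before hotplug_events is decremented. Restated without the [[ ... == \0\0\0\0... ]] escaping, under the assumption that a mismatch should fail the test:

    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] \
        || { echo "unexpected BDF set after re-attach: ${bdfs[*]}"; exit 1; }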
00:11:51.820 [2024-11-17 14:00:29.982844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-17 14:00:29.982945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-17 14:00:29.982963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-17 14:00:29.982974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-17 14:00:29.982983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-17 14:00:29.982990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-17 14:00:29.982999] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-17 14:00:29.983005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-17 14:00:29.983013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-17 14:00:29.983019] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-17 14:00:29.983027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-17 14:00:29.983034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.392 [2024-11-17 14:00:30.382070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:52.393 [2024-11-17 14:00:30.382955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.393 [2024-11-17 14:00:30.382990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.393 [2024-11-17 14:00:30.383000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.393 [2024-11-17 14:00:30.383012] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.393 [2024-11-17 14:00:30.383019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.393 [2024-11-17 14:00:30.383028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.393 [2024-11-17 14:00:30.383034] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.393 [2024-11-17 14:00:30.383042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.393 [2024-11-17 14:00:30.383048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.393 [2024-11-17 14:00:30.383056] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.393 [2024-11-17 14:00:30.383062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.393 [2024-11-17 14:00:30.383069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.393 14:00:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.393 14:00:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.393 14:00:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.393 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.653 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.653 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.653 14:00:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.911 [2024-11-17 14:00:42.782278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:04.911 [2024-11-17 14:00:42.783207] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.911 [2024-11-17 14:00:42.783310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.911 [2024-11-17 14:00:42.783367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.911 [2024-11-17 14:00:42.783423] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.911 [2024-11-17 14:00:42.783473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.911 [2024-11-17 14:00:42.783532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.911 [2024-11-17 14:00:42.783562] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.911 [2024-11-17 14:00:42.783602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.911 [2024-11-17 14:00:42.783645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.911 [2024-11-17 14:00:42.783670] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.911 [2024-11-17 14:00:42.783687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.911 [2024-11-17 14:00:42.783709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.911 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.911 14:00:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.912 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:04.912 14:00:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.912 [2024-11-17 14:00:43.182280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:04.912 [2024-11-17 14:00:43.183094] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.912 [2024-11-17 14:00:43.183193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.912 [2024-11-17 14:00:43.183263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.912 [2024-11-17 14:00:43.183320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.912 [2024-11-17 14:00:43.183340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.912 [2024-11-17 14:00:43.183367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.912 [2024-11-17 14:00:43.183417] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.912 [2024-11-17 14:00:43.183439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.912 [2024-11-17 14:00:43.183648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.912 [2024-11-17 14:00:43.183675] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.912 [2024-11-17 14:00:43.183716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.912 [2024-11-17 14:00:43.183751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:05.173 14:00:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.173 14:00:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.173 14:00:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.173 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.435 14:00:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.65 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.65 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:12:17.677 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:17.677 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79368 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79368 ']' 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79368 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79368 00:12:17.677 14:00:55 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.677 14:00:55 
sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.678 killing process with pid 79368 00:12:17.678 14:00:55 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79368' 00:12:17.678 14:00:55 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79368 00:12:17.678 14:00:55 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79368 00:12:17.678 14:00:55 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:18.512 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.512 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:18.512 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.774 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:18.774 00:12:18.774 real 2m27.750s 00:12:18.774 user 1m47.425s 00:12:18.774 sys 0m18.795s 00:12:18.774 ************************************ 00:12:18.774 END TEST sw_hotplug 00:12:18.774 ************************************ 00:12:18.774 14:00:56 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:18.774 14:00:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.774 14:00:56 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:18.774 14:00:56 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:18.774 14:00:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:18.774 14:00:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.774 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:12:18.774 ************************************ 00:12:18.774 START TEST nvme_xnvme 00:12:18.774 ************************************ 00:12:18.774 14:00:56 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:18.774 * Looking for test storage... 
00:12:18.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:18.774 14:00:57 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:18.774 14:00:57 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:18.774 14:00:57 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:19.036 14:00:57 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.036 14:00:57 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.036 14:00:57 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.036 14:00:57 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.037 --rc genhtml_branch_coverage=1 00:12:19.037 --rc genhtml_function_coverage=1 00:12:19.037 --rc genhtml_legend=1 00:12:19.037 --rc geninfo_all_blocks=1 00:12:19.037 --rc geninfo_unexecuted_blocks=1 00:12:19.037 00:12:19.037 ' 00:12:19.037 14:00:57 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.037 --rc genhtml_branch_coverage=1 00:12:19.037 --rc genhtml_function_coverage=1 00:12:19.037 --rc genhtml_legend=1 00:12:19.037 --rc geninfo_all_blocks=1 00:12:19.037 --rc geninfo_unexecuted_blocks=1 00:12:19.037 00:12:19.037 ' 00:12:19.037 14:00:57 
nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.037 --rc genhtml_branch_coverage=1 00:12:19.037 --rc genhtml_function_coverage=1 00:12:19.037 --rc genhtml_legend=1 00:12:19.037 --rc geninfo_all_blocks=1 00:12:19.037 --rc geninfo_unexecuted_blocks=1 00:12:19.037 00:12:19.037 ' 00:12:19.037 14:00:57 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:19.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.037 --rc genhtml_branch_coverage=1 00:12:19.037 --rc genhtml_function_coverage=1 00:12:19.037 --rc genhtml_legend=1 00:12:19.037 --rc geninfo_all_blocks=1 00:12:19.037 --rc geninfo_unexecuted_blocks=1 00:12:19.037 00:12:19.037 ' 00:12:19.037 14:00:57 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.037 14:00:57 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.037 14:00:57 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.037 14:00:57 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.037 14:00:57 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.037 14:00:57 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.037 14:00:57 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.037 14:00:57 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.037 14:00:57 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.037 14:00:57 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.037 14:00:57 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:19.037 14:00:57 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:19.037 14:00:57 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.037 14:00:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:19.037 
************************************ 00:12:19.037 START TEST xnvme_to_malloc_dd_copy 00:12:19.037 ************************************ 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:19.037 14:00:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:19.037 { 00:12:19.037 "subsystems": [ 00:12:19.037 { 00:12:19.037 "subsystem": "bdev", 00:12:19.037 "config": [ 00:12:19.037 { 00:12:19.037 "params": { 00:12:19.037 "block_size": 512, 00:12:19.037 "num_blocks": 2097152, 00:12:19.037 "name": "malloc0" 00:12:19.037 }, 00:12:19.037 "method": "bdev_malloc_create" 00:12:19.037 }, 00:12:19.037 { 00:12:19.037 "params": { 00:12:19.037 "io_mechanism": "libaio", 00:12:19.037 "filename": "/dev/nullb0", 00:12:19.037 "name": "null0" 00:12:19.037 }, 00:12:19.037 "method": "bdev_xnvme_create" 00:12:19.037 }, 
00:12:19.037 { 00:12:19.037 "method": "bdev_wait_for_examine" 00:12:19.037 } 00:12:19.037 ] 00:12:19.037 } 00:12:19.037 ] 00:12:19.037 } 00:12:19.037 [2024-11-17 14:00:57.265794] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:19.037 [2024-11-17 14:00:57.266102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80719 ] 00:12:19.299 [2024-11-17 14:00:57.419651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.299 [2024-11-17 14:00:57.469619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.687  [2024-11-17T14:00:59.931Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-17T14:01:00.874Z] Copying: 449/1024 [MB] (225 MBps) [2024-11-17T14:01:01.815Z] Copying: 674/1024 [MB] (224 MBps) [2024-11-17T14:01:02.076Z] Copying: 946/1024 [MB] (272 MBps) [2024-11-17T14:01:02.649Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:12:24.348 00:12:24.348 14:01:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:24.348 14:01:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:24.348 14:01:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.348 14:01:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.348 { 00:12:24.348 "subsystems": [ 00:12:24.348 { 00:12:24.348 "subsystem": "bdev", 00:12:24.348 "config": [ 00:12:24.348 { 00:12:24.348 "params": { 00:12:24.348 "block_size": 512, 00:12:24.348 "num_blocks": 2097152, 00:12:24.348 "name": "malloc0" 00:12:24.348 }, 00:12:24.348 "method": "bdev_malloc_create" 00:12:24.348 }, 00:12:24.348 { 00:12:24.348 "params": { 00:12:24.348 "io_mechanism": "libaio", 00:12:24.348 "filename": "/dev/nullb0", 00:12:24.348 "name": "null0" 00:12:24.348 }, 00:12:24.348 "method": "bdev_xnvme_create" 00:12:24.348 }, 00:12:24.348 { 00:12:24.348 "method": "bdev_wait_for_examine" 00:12:24.348 } 00:12:24.348 ] 00:12:24.348 } 00:12:24.348 ] 00:12:24.348 } 00:12:24.348 [2024-11-17 14:01:02.457324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:24.348 [2024-11-17 14:01:02.457557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80786 ] 00:12:24.348 [2024-11-17 14:01:02.605428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.348 [2024-11-17 14:01:02.641359] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.735  [2024-11-17T14:01:04.980Z] Copying: 311/1024 [MB] (311 MBps) [2024-11-17T14:01:05.924Z] Copying: 623/1024 [MB] (312 MBps) [2024-11-17T14:01:06.186Z] Copying: 936/1024 [MB] (312 MBps) [2024-11-17T14:01:06.759Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:28.458 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:28.458 14:01:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:28.458 { 00:12:28.458 "subsystems": [ 00:12:28.458 { 00:12:28.458 "subsystem": "bdev", 00:12:28.458 "config": [ 00:12:28.458 { 00:12:28.458 "params": { 00:12:28.458 "block_size": 512, 00:12:28.458 "num_blocks": 2097152, 00:12:28.458 "name": "malloc0" 00:12:28.458 }, 00:12:28.458 "method": "bdev_malloc_create" 00:12:28.458 }, 00:12:28.459 { 00:12:28.459 "params": { 00:12:28.459 "io_mechanism": "io_uring", 00:12:28.459 "filename": "/dev/nullb0", 00:12:28.459 "name": "null0" 00:12:28.459 }, 00:12:28.459 "method": "bdev_xnvme_create" 00:12:28.459 }, 00:12:28.459 { 00:12:28.459 "method": "bdev_wait_for_examine" 00:12:28.459 } 00:12:28.459 ] 00:12:28.459 } 00:12:28.459 ] 00:12:28.459 } 00:12:28.459 [2024-11-17 14:01:06.562923] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:28.459 [2024-11-17 14:01:06.563040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80840 ] 00:12:28.459 [2024-11-17 14:01:06.710010] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.459 [2024-11-17 14:01:06.743127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.897  [2024-11-17T14:01:09.142Z] Copying: 318/1024 [MB] (318 MBps) [2024-11-17T14:01:10.086Z] Copying: 636/1024 [MB] (317 MBps) [2024-11-17T14:01:10.347Z] Copying: 954/1024 [MB] (318 MBps) [2024-11-17T14:01:10.608Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:12:32.307 00:12:32.307 14:01:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:32.307 14:01:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:32.308 14:01:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:32.308 14:01:10 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:32.308 { 00:12:32.308 "subsystems": [ 00:12:32.308 { 00:12:32.308 "subsystem": "bdev", 00:12:32.308 "config": [ 00:12:32.308 { 00:12:32.308 "params": { 00:12:32.308 "block_size": 512, 00:12:32.308 "num_blocks": 2097152, 00:12:32.308 "name": "malloc0" 00:12:32.308 }, 00:12:32.308 "method": "bdev_malloc_create" 00:12:32.308 }, 00:12:32.308 { 00:12:32.308 "params": { 00:12:32.308 "io_mechanism": "io_uring", 00:12:32.308 "filename": "/dev/nullb0", 00:12:32.308 "name": "null0" 00:12:32.308 }, 00:12:32.308 "method": "bdev_xnvme_create" 00:12:32.308 }, 00:12:32.308 { 00:12:32.308 "method": "bdev_wait_for_examine" 00:12:32.308 } 00:12:32.308 ] 00:12:32.308 } 00:12:32.308 ] 00:12:32.308 } 00:12:32.308 [2024-11-17 14:01:10.575356] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:32.308 [2024-11-17 14:01:10.575476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80890 ] 00:12:32.569 [2024-11-17 14:01:10.724578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.569 [2024-11-17 14:01:10.775353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.955  [2024-11-17T14:01:13.201Z] Copying: 234/1024 [MB] (234 MBps) [2024-11-17T14:01:14.143Z] Copying: 496/1024 [MB] (262 MBps) [2024-11-17T14:01:15.086Z] Copying: 820/1024 [MB] (323 MBps) [2024-11-17T14:01:15.086Z] Copying: 1024/1024 [MB] (average 282 MBps) 00:12:36.785 00:12:36.785 14:01:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:36.785 14:01:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:37.045 ************************************ 00:12:37.045 END TEST xnvme_to_malloc_dd_copy 00:12:37.045 ************************************ 00:12:37.045 00:12:37.045 real 0m17.964s 00:12:37.045 user 0m14.715s 00:12:37.045 sys 0m2.731s 00:12:37.045 14:01:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.045 14:01:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:37.046 14:01:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:37.046 14:01:15 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:37.046 14:01:15 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.046 14:01:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.046 ************************************ 00:12:37.046 START TEST xnvme_bdevperf 00:12:37.046 ************************************ 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:37.046 
14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.046 14:01:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.046 { 00:12:37.046 "subsystems": [ 00:12:37.046 { 00:12:37.046 "subsystem": "bdev", 00:12:37.046 "config": [ 00:12:37.046 { 00:12:37.046 "params": { 00:12:37.046 "io_mechanism": "libaio", 00:12:37.046 "filename": "/dev/nullb0", 00:12:37.046 "name": "null0" 00:12:37.046 }, 00:12:37.046 "method": "bdev_xnvme_create" 00:12:37.046 }, 00:12:37.046 { 00:12:37.046 "method": "bdev_wait_for_examine" 00:12:37.046 } 00:12:37.046 ] 00:12:37.046 } 00:12:37.046 ] 00:12:37.046 } 00:12:37.046 [2024-11-17 14:01:15.258268] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:37.046 [2024-11-17 14:01:15.258356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80978 ] 00:12:37.308 [2024-11-17 14:01:15.392142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.308 [2024-11-17 14:01:15.422878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.308 Running I/O for 5 seconds... 00:12:39.638 209600.00 IOPS, 818.75 MiB/s [2024-11-17T14:01:18.510Z] 209760.00 IOPS, 819.38 MiB/s [2024-11-17T14:01:19.895Z] 209792.00 IOPS, 819.50 MiB/s [2024-11-17T14:01:20.836Z] 209840.00 IOPS, 819.69 MiB/s 00:12:42.535 Latency(us) 00:12:42.535 [2024-11-17T14:01:20.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.535 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:42.535 null0 : 5.00 209705.65 819.16 0.00 0.00 302.87 111.06 1518.67 00:12:42.535 [2024-11-17T14:01:20.836Z] =================================================================================================================== 00:12:42.536 [2024-11-17T14:01:20.837Z] Total : 209705.65 819.16 0.00 0.00 302.87 111.06 1518.67 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.536 14:01:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.536 { 00:12:42.536 "subsystems": [ 00:12:42.536 { 00:12:42.536 "subsystem": "bdev", 00:12:42.536 "config": [ 00:12:42.536 { 00:12:42.536 "params": { 00:12:42.536 "io_mechanism": "io_uring", 00:12:42.536 "filename": "/dev/nullb0", 00:12:42.536 "name": "null0" 00:12:42.536 }, 00:12:42.536 "method": "bdev_xnvme_create" 00:12:42.536 }, 00:12:42.536 { 00:12:42.536 "method": 
"bdev_wait_for_examine" 00:12:42.536 } 00:12:42.536 ] 00:12:42.536 } 00:12:42.536 ] 00:12:42.536 } 00:12:42.536 [2024-11-17 14:01:20.717606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:42.536 [2024-11-17 14:01:20.717860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81041 ] 00:12:42.797 [2024-11-17 14:01:20.865674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.797 [2024-11-17 14:01:20.897634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.797 Running I/O for 5 seconds... 00:12:44.685 238592.00 IOPS, 932.00 MiB/s [2024-11-17T14:01:24.370Z] 238528.00 IOPS, 931.75 MiB/s [2024-11-17T14:01:25.304Z] 238485.33 IOPS, 931.58 MiB/s [2024-11-17T14:01:26.238Z] 238480.00 IOPS, 931.56 MiB/s 00:12:47.937 Latency(us) 00:12:47.937 [2024-11-17T14:01:26.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.937 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:47.937 null0 : 5.00 238453.10 931.46 0.00 0.00 266.01 145.72 1474.56 00:12:47.937 [2024-11-17T14:01:26.238Z] =================================================================================================================== 00:12:47.937 [2024-11-17T14:01:26.238Z] Total : 238453.10 931.46 0.00 0.00 266.01 145.72 1474.56 00:12:47.937 14:01:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:47.937 14:01:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:47.937 ************************************ 00:12:47.937 END TEST xnvme_bdevperf 00:12:47.937 ************************************ 00:12:47.937 00:12:47.937 real 0m10.947s 00:12:47.937 user 0m8.676s 00:12:47.937 sys 0m2.047s 00:12:47.937 14:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.937 14:01:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:47.937 ************************************ 00:12:47.937 END TEST nvme_xnvme 00:12:47.937 ************************************ 00:12:47.937 00:12:47.937 real 0m29.191s 00:12:47.937 user 0m23.511s 00:12:47.937 sys 0m4.899s 00:12:47.937 14:01:26 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:47.937 14:01:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.937 14:01:26 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:47.937 14:01:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:47.937 14:01:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:47.937 14:01:26 -- common/autotest_common.sh@10 -- # set +x 00:12:48.198 ************************************ 00:12:48.198 START TEST blockdev_xnvme 00:12:48.198 ************************************ 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:48.198 * Looking for test storage... 
00:12:48.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.198 14:01:26 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:48.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.198 --rc genhtml_branch_coverage=1 00:12:48.198 --rc genhtml_function_coverage=1 00:12:48.198 --rc genhtml_legend=1 00:12:48.198 --rc geninfo_all_blocks=1 00:12:48.198 --rc geninfo_unexecuted_blocks=1 00:12:48.198 00:12:48.198 ' 00:12:48.198 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:48.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.199 --rc genhtml_branch_coverage=1 00:12:48.199 --rc genhtml_function_coverage=1 00:12:48.199 --rc genhtml_legend=1 
00:12:48.199 --rc geninfo_all_blocks=1 00:12:48.199 --rc geninfo_unexecuted_blocks=1 00:12:48.199 00:12:48.199 ' 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:48.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.199 --rc genhtml_branch_coverage=1 00:12:48.199 --rc genhtml_function_coverage=1 00:12:48.199 --rc genhtml_legend=1 00:12:48.199 --rc geninfo_all_blocks=1 00:12:48.199 --rc geninfo_unexecuted_blocks=1 00:12:48.199 00:12:48.199 ' 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:48.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.199 --rc genhtml_branch_coverage=1 00:12:48.199 --rc genhtml_function_coverage=1 00:12:48.199 --rc genhtml_legend=1 00:12:48.199 --rc geninfo_all_blocks=1 00:12:48.199 --rc geninfo_unexecuted_blocks=1 00:12:48.199 00:12:48.199 ' 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:12:48.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
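With spdk_tgt up, the blockdev_xnvme trace below (setup_xnvme_conf, blockdev.sh@91-@100) resets the PCI bindings, waits for the namespaces to reappear as kernel block devices, skips any zoned namespace, and feeds one bdev_xnvme_create line per namespace to the RPC in a single batch. A stand-alone sketch of the same idea, assuming a running target and the stock scripts/rpc.py client (the path is an assumption; the real test batches the calls through rpc_cmd, as the trace shows):

    # one xNVMe bdev per kernel NVMe namespace, driven through io_uring
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                              # block devices only
        zoned=/sys/block/${nvme##*/}/queue/zoned
        [[ -e $zoned && $(cat "$zoned") != none ]] && continue  # skip zoned namespaces
        scripts/rpc.py bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring
    done

In this run that yields six xNVMe bdevs (nvme0n1, nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3n1), matching the printf the trace registers through rpc_cmd.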
00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81178 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81178 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81178 ']' 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.199 14:01:26 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:48.199 14:01:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.199 [2024-11-17 14:01:26.460416] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:12:48.199 [2024-11-17 14:01:26.460528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81178 ] 00:12:48.460 [2024-11-17 14:01:26.603030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.460 [2024-11-17 14:01:26.632454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.031 14:01:27 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:49.031 14:01:27 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:12:49.031 14:01:27 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:12:49.031 14:01:27 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:12:49.031 14:01:27 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:12:49.031 14:01:27 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:12:49.031 14:01:27 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:49.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:49.550 Waiting for block devices as requested 00:12:49.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.550 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.550 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:49.809 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:55.092 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:55.093 nvme0n1 00:12:55.093 nvme1n1 00:12:55.093 nvme2n1 00:12:55.093 nvme2n2 00:12:55.093 nvme2n3 00:12:55.093 nvme3n1 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:12:55.093 14:01:32 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 14:01:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.093 
14:01:33 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.093 14:01:33 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:12:55.093 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "77bd1e4e-195e-4ae0-a009-ffe013e47e71"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "77bd1e4e-195e-4ae0-a009-ffe013e47e71",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2b6e4bb2-b32e-4183-bb0b-ba6acb893a6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b6e4bb2-b32e-4183-bb0b-ba6acb893a6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36204971-f665-4c95-8951-62a115340326"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36204971-f665-4c95-8951-62a115340326",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "71a9278a-8e97-4091-9d72-85b42bd1eb7a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71a9278a-8e97-4091-9d72-85b42bd1eb7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 
0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a407cfb4-eaf3-47e8-915a-b7594f3164af"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a407cfb4-eaf3-47e8-915a-b7594f3164af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2ed5f443-4325-45be-92b2-9c3f30fa532e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2ed5f443-4325-45be-92b2-9c3f30fa532e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81178 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81178 ']' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81178 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81178 00:12:55.094 killing process with pid 81178 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 81178' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81178 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81178 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:55.094 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.094 14:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.094 ************************************ 00:12:55.094 START TEST bdev_hello_world 00:12:55.094 ************************************ 00:12:55.094 14:01:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:55.352 [2024-11-17 14:01:33.430255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:55.352 [2024-11-17 14:01:33.430447] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81521 ] 00:12:55.352 [2024-11-17 14:01:33.569783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.352 [2024-11-17 14:01:33.598383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.611 [2024-11-17 14:01:33.755163] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:55.611 [2024-11-17 14:01:33.755345] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:55.611 [2024-11-17 14:01:33.755365] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:55.611 [2024-11-17 14:01:33.756884] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:55.611 [2024-11-17 14:01:33.757082] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:55.611 [2024-11-17 14:01:33.757098] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:55.611 [2024-11-17 14:01:33.757334] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
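The hello_bdev notices just above capture the example's whole round trip: it opens the bdev named with -b, takes an I/O channel, writes "Hello World!", then reads the block back and prints the string (the matching "Stopping app" notice follows below). A minimal sketch of the invocation exactly as run_test launches it in this trace; both paths are the ones this CI run uses and are assumptions anywhere else:

    # Run the SPDK hello_bdev example against the bdev config generated
    # earlier in this test; -b names the bdev to open (an xNVMe bdev here).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
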
00:12:55.611 00:12:55.611 [2024-11-17 14:01:33.757347] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:55.611 ************************************ 00:12:55.611 END TEST bdev_hello_world 00:12:55.611 ************************************ 00:12:55.611 00:12:55.611 real 0m0.508s 00:12:55.611 user 0m0.265s 00:12:55.611 sys 0m0.136s 00:12:55.611 14:01:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.611 14:01:33 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 14:01:33 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:12:55.869 14:01:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.869 14:01:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.869 14:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 ************************************ 00:12:55.869 START TEST bdev_bounds 00:12:55.869 ************************************ 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81547 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:55.869 Process bdevio pid: 81547 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81547' 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81547 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81547 ']' 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.869 14:01:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:55.869 [2024-11-17 14:01:34.006490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
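bdev_bounds drives bdevio in wait mode: -w makes the app load the bdev config and then block until an RPC triggers the registered CUnit suites, waitforlisten polls /var/tmp/spdk.sock until the app answers, and tests.py perform_tests fires the suites (the EAL parameter dump and reactor notices that follow below are that app starting). A condensed sketch of the launch/wait/trigger sequence visible here, with $SPDK_REPO standing in for /home/vagrant/spdk_repo/spdk and the polling loop a simplified stand-in for the waitforlisten helper:

    # Start bdevio in wait mode: it loads the bdev config, then blocks
    # until an RPC tells it to run the registered CUnit suites.
    "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" &
    bdevio_pid=$!

    # Poll the default RPC socket until the app is up and answering.
    until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 \
        rpc_get_methods &> /dev/null; do
        sleep 0.5
    done

    # Trigger the test suites over RPC, then reap the app.
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"
    wait "$bdevio_pid"
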
00:12:55.869 [2024-11-17 14:01:34.006612] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81547 ]
00:12:55.869 [2024-11-17 14:01:34.151740] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:56.127 [2024-11-17 14:01:34.186158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:12:56.127 [2024-11-17 14:01:34.186567] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:56.127 [2024-11-17 14:01:34.186626] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:12:56.713 14:01:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:56.713 14:01:34 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:12:56.713 14:01:34 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:12:56.713 I/O targets:
00:12:56.713 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:12:56.713 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:12:56.713 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:56.713 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:56.713 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:12:56.713 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:12:56.713
00:12:56.713
00:12:56.713 CUnit - A unit testing framework for C - Version 2.1-3
00:12:56.713 http://cunit.sourceforge.net/
00:12:56.713
00:12:56.713
00:12:56.713 Suite: bdevio tests on: nvme3n1
00:12:56.713 Test: blockdev write read block ...passed
00:12:56.713 Test: blockdev write zeroes read block ...passed
00:12:56.713 Test: blockdev write zeroes read no split ...passed
00:12:56.713 Test: blockdev write zeroes read split ...passed
00:12:56.713 Test: blockdev write zeroes read split partial ...passed
00:12:56.713 Test: blockdev reset ...passed
00:12:56.713 Test: blockdev write read 8 blocks ...passed
00:12:56.713 Test: blockdev write read size > 128k ...passed
00:12:56.713 Test: blockdev write read invalid size ...passed
00:12:56.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:56.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:56.713 Test: blockdev write read max offset ...passed
00:12:56.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:56.713 Test: blockdev writev readv 8 blocks ...passed
00:12:56.713 Test: blockdev writev readv 30 x 1block ...passed
00:12:56.713 Test: blockdev writev readv block ...passed
00:12:56.713 Test: blockdev writev readv size > 128k ...passed
00:12:56.713 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:56.713 Test: blockdev comparev and writev ...passed
00:12:56.713 Test: blockdev nvme passthru rw ...passed
00:12:56.713 Test: blockdev nvme passthru vendor specific ...passed
00:12:56.713 Test: blockdev nvme admin passthru ...passed
00:12:56.713 Test: blockdev copy ...passed
00:12:56.713 Suite: bdevio tests on: nvme2n3
00:12:56.713 Test: blockdev write read block ...passed
00:12:56.713 Test: blockdev write zeroes read block ...passed
00:12:56.713 Test: blockdev write zeroes read no split ...passed
00:12:56.714 Test: blockdev write zeroes read split ...passed
00:12:56.714 Test: blockdev write zeroes read split partial ...passed
00:12:56.714 Test: blockdev reset ...passed
00:12:56.714 Test: blockdev write read 8 blocks ...passed 00:12:56.714 Test: blockdev write read size > 128k ...passed 00:12:56.714 Test: blockdev write read invalid size ...passed 00:12:56.714 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.714 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.714 Test: blockdev write read max offset ...passed 00:12:56.714 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.714 Test: blockdev writev readv 8 blocks ...passed 00:12:56.714 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.972 Test: blockdev writev readv block ...passed 00:12:56.972 Test: blockdev writev readv size > 128k ...passed 00:12:56.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.972 Test: blockdev comparev and writev ...passed 00:12:56.972 Test: blockdev nvme passthru rw ...passed 00:12:56.972 Test: blockdev nvme passthru vendor specific ...passed 00:12:56.972 Test: blockdev nvme admin passthru ...passed 00:12:56.972 Test: blockdev copy ...passed 00:12:56.972 Suite: bdevio tests on: nvme2n2 00:12:56.972 Test: blockdev write read block ...passed 00:12:56.972 Test: blockdev write zeroes read block ...passed 00:12:56.972 Test: blockdev write zeroes read no split ...passed 00:12:56.972 Test: blockdev write zeroes read split ...passed 00:12:56.972 Test: blockdev write zeroes read split partial ...passed 00:12:56.972 Test: blockdev reset ...passed 00:12:56.972 Test: blockdev write read 8 blocks ...passed 00:12:56.972 Test: blockdev write read size > 128k ...passed 00:12:56.972 Test: blockdev write read invalid size ...passed 00:12:56.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.972 Test: blockdev write read max offset ...passed 00:12:56.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.972 Test: blockdev writev readv 8 blocks ...passed 00:12:56.972 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.972 Test: blockdev writev readv block ...passed 00:12:56.972 Test: blockdev writev readv size > 128k ...passed 00:12:56.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.972 Test: blockdev comparev and writev ...passed 00:12:56.972 Test: blockdev nvme passthru rw ...passed 00:12:56.972 Test: blockdev nvme passthru vendor specific ...passed 00:12:56.972 Test: blockdev nvme admin passthru ...passed 00:12:56.972 Test: blockdev copy ...passed 00:12:56.972 Suite: bdevio tests on: nvme2n1 00:12:56.972 Test: blockdev write read block ...passed 00:12:56.972 Test: blockdev write zeroes read block ...passed 00:12:56.972 Test: blockdev write zeroes read no split ...passed 00:12:56.972 Test: blockdev write zeroes read split ...passed 00:12:56.972 Test: blockdev write zeroes read split partial ...passed 00:12:56.972 Test: blockdev reset ...passed 00:12:56.972 Test: blockdev write read 8 blocks ...passed 00:12:56.972 Test: blockdev write read size > 128k ...passed 00:12:56.972 Test: blockdev write read invalid size ...passed 00:12:56.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.972 Test: blockdev write read max offset ...passed 00:12:56.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.972 Test: blockdev writev readv 8 blocks 
...passed 00:12:56.972 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.972 Test: blockdev writev readv block ...passed 00:12:56.972 Test: blockdev writev readv size > 128k ...passed 00:12:56.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.972 Test: blockdev comparev and writev ...passed 00:12:56.972 Test: blockdev nvme passthru rw ...passed 00:12:56.972 Test: blockdev nvme passthru vendor specific ...passed 00:12:56.972 Test: blockdev nvme admin passthru ...passed 00:12:56.972 Test: blockdev copy ...passed 00:12:56.972 Suite: bdevio tests on: nvme1n1 00:12:56.972 Test: blockdev write read block ...passed 00:12:56.972 Test: blockdev write zeroes read block ...passed 00:12:56.972 Test: blockdev write zeroes read no split ...passed 00:12:56.972 Test: blockdev write zeroes read split ...passed 00:12:56.972 Test: blockdev write zeroes read split partial ...passed 00:12:56.972 Test: blockdev reset ...passed 00:12:56.972 Test: blockdev write read 8 blocks ...passed 00:12:56.972 Test: blockdev write read size > 128k ...passed 00:12:56.972 Test: blockdev write read invalid size ...passed 00:12:56.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.972 Test: blockdev write read max offset ...passed 00:12:56.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.972 Test: blockdev writev readv 8 blocks ...passed 00:12:56.972 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.972 Test: blockdev writev readv block ...passed 00:12:56.972 Test: blockdev writev readv size > 128k ...passed 00:12:56.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.972 Test: blockdev comparev and writev ...passed 00:12:56.972 Test: blockdev nvme passthru rw ...passed 00:12:56.973 Test: blockdev nvme passthru vendor specific ...passed 00:12:56.973 Test: blockdev nvme admin passthru ...passed 00:12:56.973 Test: blockdev copy ...passed 00:12:56.973 Suite: bdevio tests on: nvme0n1 00:12:56.973 Test: blockdev write read block ...passed 00:12:56.973 Test: blockdev write zeroes read block ...passed 00:12:56.973 Test: blockdev write zeroes read no split ...passed 00:12:56.973 Test: blockdev write zeroes read split ...passed 00:12:56.973 Test: blockdev write zeroes read split partial ...passed 00:12:56.973 Test: blockdev reset ...passed 00:12:56.973 Test: blockdev write read 8 blocks ...passed 00:12:56.973 Test: blockdev write read size > 128k ...passed 00:12:56.973 Test: blockdev write read invalid size ...passed 00:12:56.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:56.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:56.973 Test: blockdev write read max offset ...passed 00:12:56.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:56.973 Test: blockdev writev readv 8 blocks ...passed 00:12:56.973 Test: blockdev writev readv 30 x 1block ...passed 00:12:56.973 Test: blockdev writev readv block ...passed 00:12:56.973 Test: blockdev writev readv size > 128k ...passed 00:12:56.973 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:56.973 Test: blockdev comparev and writev ...passed 00:12:56.973 Test: blockdev nvme passthru rw ...passed 00:12:56.973 Test: blockdev nvme passthru vendor specific ...passed 00:12:56.973 Test: blockdev nvme admin passthru ...passed 00:12:56.973 Test: blockdev copy ...passed 
00:12:56.973
00:12:56.973 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:12:56.973               suites      6      6    n/a      0        0
00:12:56.973                tests    138    138    138      0        0
00:12:56.973              asserts    780    780    780      0      n/a
00:12:56.973
00:12:56.973 Elapsed time = 0.477 seconds
00:12:56.973 0
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81547
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81547 ']'
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81547
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81547
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:56.973 killing process with pid 81547
14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81547'
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81547
00:12:56.973 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81547
00:12:57.231 14:01:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:12:57.231
00:12:57.231 real 0m1.412s
00:12:57.231 user 0m3.478s
00:12:57.231 sys 0m0.270s
00:12:57.231 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:57.231 ************************************
00:12:57.231 END TEST bdev_bounds
00:12:57.231 ************************************
00:12:57.231 14:01:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:12:57.231 14:01:35 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:12:57.231 14:01:35 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:57.231 14:01:35 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:57.231 14:01:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:12:57.231 ************************************
00:12:57.231 START TEST bdev_nbd
00:12:57.231 ************************************
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:12:57.231 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
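nbd_function_test sizes everything from the bdev list it was handed: with bdev_num=6 it takes the first six entries of nbd_all (declared in the trace just below; the array is ordered lexicographically, which is why /dev/nbd10 comes before /dev/nbd2) and pairs them positionally with bdev_list. A sketch of that pairing as it plays out later in this trace, with $SPDK_REPO standing in for /home/vagrant/spdk_repo/spdk; the socket path and device names are the ones this run uses:

    # Export each bdev as an NBD block device over the RPC socket that
    # the bdev_svc app for this test listens on.
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    for i in "${!bdev_list[@]}"; do
        "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done
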
00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81597 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81597 /var/tmp/spdk-nbd.sock 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81597 ']' 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:57.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:57.232 14:01:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 [2024-11-17 14:01:35.488127] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
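Starting the NBD-side app is only half the setup; each exported node is then probed before use. The waitfornbd helper, whose xtrace dominates the lines below, retries up to 20 times until the name shows up in /proc/partitions and then forces a one-block O_DIRECT read through the device, checking the copied size. A condensed sketch reconstructed from that xtrace ($SPDK_REPO as before; the sleep between retries is an assumption, since the captured trace succeeds on the first grep):

    # Wait until an NBD node is actually usable: it must be visible in
    # /proc/partitions and readable with O_DIRECT.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # A single 4 KiB direct read proves the kernel<->SPDK NBD path works.
        dd if="/dev/$nbd_name" of="$SPDK_REPO/test/bdev/nbdtest" \
            bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$SPDK_REPO/test/bdev/nbdtest")
        rm -f "$SPDK_REPO/test/bdev/nbdtest"
        # A nonzero scratch-file size means the read really returned data.
        [ "$size" != 0 ]
    }
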
00:12:57.232 [2024-11-17 14:01:35.488259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.491 [2024-11-17 14:01:35.636342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.491 [2024-11-17 14:01:35.669084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:58.057 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.315 
1+0 records in 00:12:58.315 1+0 records out 00:12:58.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101945 s, 4.0 MB/s 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:58.315 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.573 1+0 records in 00:12:58.573 1+0 records out 00:12:58.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687925 s, 6.0 MB/s 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:58.573 14:01:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:58.831 14:01:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:58.831 1+0 records in 00:12:58.831 1+0 records out 00:12:58.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109699 s, 3.7 MB/s 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:58.831 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.089 1+0 records in 00:12:59.089 1+0 records out 00:12:59.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00364008 s, 1.1 MB/s 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:59.089 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.348 1+0 records in 00:12:59.348 1+0 records out 00:12:59.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117642 s, 3.5 MB/s 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:59.348 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:12:59.608 14:01:37 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:12:59.608 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:59.609 1+0 records in 00:12:59.609 1+0 records out 00:12:59.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126463 s, 3.2 MB/s 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:59.609 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:59.869 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd0", 00:12:59.869 "bdev_name": "nvme0n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd1", 00:12:59.869 "bdev_name": "nvme1n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd2", 00:12:59.869 "bdev_name": "nvme2n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd3", 00:12:59.869 "bdev_name": "nvme2n2" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd4", 00:12:59.869 "bdev_name": "nvme2n3" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd5", 00:12:59.869 "bdev_name": "nvme3n1" 00:12:59.869 } 00:12:59.869 ]' 00:12:59.869 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:59.869 14:01:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd0", 00:12:59.869 "bdev_name": "nvme0n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd1", 00:12:59.869 "bdev_name": "nvme1n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd2", 00:12:59.869 "bdev_name": "nvme2n1" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd3", 00:12:59.869 "bdev_name": "nvme2n2" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd4", 00:12:59.869 "bdev_name": "nvme2n3" 00:12:59.869 }, 00:12:59.869 { 00:12:59.869 "nbd_device": "/dev/nbd5", 00:12:59.869 "bdev_name": "nvme3n1" 00:12:59.869 } 00:12:59.869 ]' 00:12:59.869 14:01:37 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.869 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.131 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.392 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.393 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.654 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.914 14:01:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.914 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.175 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:01.436 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:01.437 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:01.698 /dev/nbd0 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.698 1+0 records in 00:13:01.698 1+0 records out 00:13:01.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011223 s, 3.6 MB/s 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:01.698 14:01:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:01.959 /dev/nbd1 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:01.959 1+0 records in 00:13:01.959 1+0 records out 00:13:01.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123444 s, 3.3 MB/s 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:01.959 14:01:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:01.959 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:02.220 /dev/nbd10 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.220 1+0 records in 00:13:02.220 1+0 records out 00:13:02.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110818 s, 3.7 MB/s 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:02.220 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:02.482 /dev/nbd11 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.482 14:01:40 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.482 1+0 records in 00:13:02.482 1+0 records out 00:13:02.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0019668 s, 2.1 MB/s 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:02.482 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:02.745 /dev/nbd12 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.745 1+0 records in 00:13:02.745 1+0 records out 00:13:02.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114112 s, 3.6 MB/s 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:02.745 14:01:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:03.007 /dev/nbd13 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:03.007 1+0 records in 00:13:03.007 1+0 records out 00:13:03.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104478 s, 3.9 MB/s 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:03.007 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:03.297 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd0", 00:13:03.297 "bdev_name": "nvme0n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd1", 00:13:03.297 "bdev_name": "nvme1n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd10", 00:13:03.297 "bdev_name": "nvme2n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd11", 00:13:03.297 "bdev_name": "nvme2n2" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd12", 00:13:03.297 "bdev_name": "nvme2n3" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd13", 00:13:03.297 "bdev_name": "nvme3n1" 00:13:03.297 } 00:13:03.297 ]' 00:13:03.297 14:01:41 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:03.297 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd0", 00:13:03.297 "bdev_name": "nvme0n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd1", 00:13:03.297 "bdev_name": "nvme1n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd10", 00:13:03.297 "bdev_name": "nvme2n1" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd11", 00:13:03.297 "bdev_name": "nvme2n2" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd12", 00:13:03.297 "bdev_name": "nvme2n3" 00:13:03.297 }, 00:13:03.297 { 00:13:03.297 "nbd_device": "/dev/nbd13", 00:13:03.297 "bdev_name": "nvme3n1" 00:13:03.297 } 00:13:03.297 ]' 00:13:03.297 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:03.297 /dev/nbd1 00:13:03.297 /dev/nbd10 00:13:03.297 /dev/nbd11 00:13:03.297 /dev/nbd12 00:13:03.298 /dev/nbd13' 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:03.298 /dev/nbd1 00:13:03.298 /dev/nbd10 00:13:03.298 /dev/nbd11 00:13:03.298 /dev/nbd12 00:13:03.298 /dev/nbd13' 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:03.298 256+0 records in 00:13:03.298 256+0 records out 00:13:03.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00894722 s, 117 MB/s 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.298 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:03.558 256+0 records in 00:13:03.558 256+0 records out 00:13:03.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236889 s, 4.4 MB/s 00:13:03.559 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.559 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:03.819 256+0 records in 00:13:03.819 256+0 records out 00:13:03.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.283479 s, 
3.7 MB/s 00:13:03.819 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:03.819 14:01:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:04.080 256+0 records in 00:13:04.080 256+0 records out 00:13:04.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235226 s, 4.5 MB/s 00:13:04.080 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:04.080 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:04.342 256+0 records in 00:13:04.342 256+0 records out 00:13:04.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225573 s, 4.6 MB/s 00:13:04.342 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:04.342 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:04.342 256+0 records in 00:13:04.342 256+0 records out 00:13:04.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14717 s, 7.1 MB/s 00:13:04.342 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:04.342 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:04.604 256+0 records in 00:13:04.604 256+0 records out 00:13:04.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225321 s, 4.7 MB/s 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:04.604 
14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.604 14:01:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:04.866 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.128 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.390 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.651 14:01:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:05.913 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.174 
14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.174 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:06.435 malloc_lvol_verify 00:13:06.435 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:06.697 2a2e9e84-3ced-41a9-b577-6d7869338629 00:13:06.697 14:01:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:06.957 ea44b4ef-066e-45fd-93f0-ce6563ea5f71 00:13:06.957 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:07.217 /dev/nbd0 00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:07.217 mke2fs 1.47.0 (5-Feb-2023) 00:13:07.217 Discarding device blocks: 0/4096 
done
00:13:07.217 Creating filesystem with 4096 1k blocks and 1024 inodes
00:13:07.217 
00:13:07.217 Allocating group tables: 0/1 done
00:13:07.217 Writing inode tables: 0/1 done
00:13:07.217 Creating journal (1024 blocks): done
00:13:07.217 Writing superblocks and filesystem accounting information: 0/1 done
00:13:07.217 
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:07.217 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81597 ']'
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81597'
00:13:07.478 killing process with pid 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81597
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:13:07.478 
00:13:07.478 real 0m10.292s
00:13:07.478 user 0m14.023s
00:13:07.478 sys 0m3.773s
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:07.478 ************************************
00:13:07.478 END TEST bdev_nbd
00:13:07.478 ************************************
00:13:07.478 14:01:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
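Every start and stop in the nbd phase above drives the same polling helpers: waitfornbd (the device must appear in /proc/partitions and answer a single direct read) and waitfornbd_exit (the device must disappear again), each with a 20-attempt budget. A condensed sketch of the teardown half of that idiom, reconstructed from the trace; the rpc.py and socket paths are the ones logged above, while the 0.1 s retry delay is an assumption the trace does not show:

# Sketch: stop one SPDK nbd device and poll until the kernel drops it,
# mirroring the traced nbd_stop_disk + waitfornbd_exit sequence.
stop_and_wait() {
    local dev=$1 nbd_name i
    nbd_name=$(basename "$dev")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do
        # Teardown is complete once the name is gone from /proc/partitions.
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1  # assumption: the trace does not record the retry delay
    done
    return 1
}

for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    stop_and_wait "$dev"
done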
00:13:07.478 14:01:45 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:07.478 14:01:45 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:07.478 14:01:45 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:07.478 14:01:45 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:07.478 14:01:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.478 14:01:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.478 14:01:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.478 ************************************ 00:13:07.478 START TEST bdev_fio 00:13:07.478 ************************************ 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:07.478 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:07.478 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:07.740 ************************************ 00:13:07.740 START TEST bdev_fio_rw_verify 00:13:07.740 ************************************ 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.740 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.741 14:01:45 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:07.741 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:07.741 fio-3.35 00:13:07.741 Starting 6 threads 00:13:19.990 00:13:19.990 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=81998: Sun Nov 17 14:01:56 2024 00:13:19.990 read: IOPS=15.6k, BW=61.1MiB/s (64.1MB/s)(611MiB/10002msec) 00:13:19.990 slat (usec): min=2, max=2460, avg= 5.72, stdev=17.69 00:13:19.990 clat (usec): min=78, max=8570, avg=1250.03, stdev=926.55 00:13:19.990 lat (usec): min=84, max=8573, avg=1255.75, stdev=927.29 
00:13:19.990 clat percentiles (usec):
00:13:19.990 | 50.000th=[ 1057], 99.000th=[ 4178], 99.900th=[ 5932], 99.990th=[ 7504],
00:13:19.990 | 99.999th=[ 8586]
00:13:19.990 write: IOPS=15.9k, BW=62.2MiB/s (65.2MB/s)(622MiB/10002msec); 0 zone resets
00:13:19.990 slat (usec): min=10, max=4809, avg=38.83, stdev=152.71
00:13:19.990 clat (usec): min=75, max=8509, avg=1475.37, stdev=1039.49
00:13:19.990 lat (usec): min=89, max=8554, avg=1514.20, stdev=1056.85
00:13:19.990 clat percentiles (usec):
00:13:19.990 | 50.000th=[ 1287], 99.000th=[ 4686], 99.900th=[ 6456], 99.990th=[ 8029],
00:13:19.990 | 99.999th=[ 8455]
00:13:19.990 bw ( KiB/s): min=42447, max=177624, per=100.00%, avg=64965.21, stdev=5174.28, samples=114
00:13:19.990 iops : min=10611, max=44406, avg=16240.89, stdev=1293.59, samples=114
00:13:19.990 lat (usec) : 100=0.03%, 250=5.60%, 500=15.99%, 750=12.92%, 1000=9.58%
00:13:19.990 lat (msec) : 2=33.00%, 4=21.01%, 10=1.87%
00:13:19.990 cpu : usr=42.87%, sys=33.35%, ctx=5601, majf=0, minf=15551
00:13:19.990 IO depths : 1=11.6%, 2=24.1%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:19.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:19.990 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:19.990 issued rwts: total=156476,159266,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:19.990 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:19.990 
00:13:19.990 Run status group 0 (all jobs):
00:13:19.990 READ: bw=61.1MiB/s (64.1MB/s), 61.1MiB/s-61.1MiB/s (64.1MB/s-64.1MB/s), io=611MiB (641MB), run=10002-10002msec
00:13:19.990 WRITE: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=622MiB (652MB), run=10002-10002msec
00:13:19.990 -----------------------------------------------------
00:13:19.990 Suppressions used:
00:13:19.990 count bytes template
00:13:19.990 6 48 /usr/src/fio/parse.c
00:13:19.990 2681 257376 /usr/src/fio/iolog.c
00:13:19.990 1 8 libtcmalloc_minimal.so
00:13:19.990 1 904 libcrypto.so
00:13:19.990 -----------------------------------------------------
00:13:19.990 
00:13:19.990 
00:13:19.990 real 0m11.119s
00:13:19.990 user 0m26.482s
00:13:19.990 sys 0m20.285s
00:13:19.990 14:01:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:19.990 ************************************
00:13:19.990 END TEST bdev_fio_rw_verify
00:13:19.990 ************************************
00:13:19.990 14:01:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context=
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio
00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:19.990 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "77bd1e4e-195e-4ae0-a009-ffe013e47e71"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "77bd1e4e-195e-4ae0-a009-ffe013e47e71",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2b6e4bb2-b32e-4183-bb0b-ba6acb893a6b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2b6e4bb2-b32e-4183-bb0b-ba6acb893a6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "36204971-f665-4c95-8951-62a115340326"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36204971-f665-4c95-8951-62a115340326",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "71a9278a-8e97-4091-9d72-85b42bd1eb7a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71a9278a-8e97-4091-9d72-85b42bd1eb7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a407cfb4-eaf3-47e8-915a-b7594f3164af"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a407cfb4-eaf3-47e8-915a-b7594f3164af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2ed5f443-4325-45be-92b2-9c3f30fa532e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2ed5f443-4325-45be-92b2-9c3f30fa532e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:19.991 /home/vagrant/spdk_repo/spdk 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:19.991 00:13:19.991 real 0m11.285s 00:13:19.991 user 0m26.552s 
00:13:19.991 sys 0m20.360s 00:13:19.991 ************************************ 00:13:19.991 END TEST bdev_fio 00:13:19.991 ************************************ 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.991 14:01:57 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:19.991 14:01:57 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:19.991 14:01:57 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:19.991 14:01:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:19.991 14:01:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.991 14:01:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.991 ************************************ 00:13:19.991 START TEST bdev_verify 00:13:19.991 ************************************ 00:13:19.991 14:01:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:19.991 [2024-11-17 14:01:57.183829] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:19.991 [2024-11-17 14:01:57.183978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82168 ] 00:13:19.991 [2024-11-17 14:01:57.329365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.991 [2024-11-17 14:01:57.380192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.991 [2024-11-17 14:01:57.380298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.991 Running I/O for 5 seconds... 
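The five-second verify run announced here uses the bdevperf invocation recorded in the run_test entry a few lines above. A sketch of reproducing it outside the harness, assuming an SPDK checkout at SPDK_DIR (that variable name is an assumption; the flags, including -C, are copied verbatim from the traced command):

# -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/Os
# -w verify: verification workload; -t 5: run for 5 seconds
# -m 0x3: core mask covering cores 0 and 1, matching the two reactor notices above
SPDK_DIR=/home/vagrant/spdk_repo/spdk  # assumption: adjust for a local checkout
"$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3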
00:13:21.550 23200.00 IOPS, 90.62 MiB/s [2024-11-17T14:02:01.240Z] 23504.00 IOPS, 91.81 MiB/s [2024-11-17T14:02:01.812Z] 23136.00 IOPS, 90.38 MiB/s [2024-11-17T14:02:02.758Z] 23056.00 IOPS, 90.06 MiB/s [2024-11-17T14:02:02.758Z] 23174.40 IOPS, 90.53 MiB/s
00:13:24.457 Latency(us)
00:13:24.457 [2024-11-17T14:02:02.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:24.457 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0xa0000
00:13:24.457 nvme0n1 : 5.05 1823.83 7.12 0.00 0.00 70040.71 7561.85 76626.71
00:13:24.457 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0xa0000 length 0xa0000
00:13:24.457 nvme0n1 : 5.06 1720.23 6.72 0.00 0.00 74268.79 13107.20 95178.44
00:13:24.457 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0xbd0bd
00:13:24.457 nvme1n1 : 5.07 2387.08 9.32 0.00 0.00 53269.63 5948.65 69367.34
00:13:24.457 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:24.457 nvme1n1 : 5.05 2274.22 8.88 0.00 0.00 55986.53 7461.02 62914.56
00:13:24.457 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0x80000
00:13:24.457 nvme2n1 : 5.06 1873.59 7.32 0.00 0.00 67717.09 9074.22 69367.34
00:13:24.457 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x80000 length 0x80000
00:13:24.457 nvme2n1 : 5.06 1845.43 7.21 0.00 0.00 69007.16 7662.67 78239.90
00:13:24.457 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0x80000
00:13:24.457 nvme2n2 : 5.06 1822.17 7.12 0.00 0.00 69475.27 14518.74 62107.96
00:13:24.457 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x80000 length 0x80000
00:13:24.457 nvme2n2 : 5.07 1818.81 7.10 0.00 0.00 69761.77 8318.03 70173.93
00:13:24.457 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0x80000
00:13:24.457 nvme2n3 : 5.08 1841.10 7.19 0.00 0.00 68627.30 7309.78 64527.75
00:13:24.457 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x80000 length 0x80000
00:13:24.457 nvme2n3 : 5.07 1816.44 7.10 0.00 0.00 69680.02 7259.37 70173.93
00:13:24.457 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x0 length 0x20000
00:13:24.457 nvme3n1 : 5.08 1863.14 7.28 0.00 0.00 67728.92 3579.27 67350.84
00:13:24.457 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:24.457 Verification LBA range: start 0x20000 length 0x20000
00:13:24.457 nvme3n1 : 5.07 1794.16 7.01 0.00 0.00 70413.53 4562.31 75013.51
00:13:24.457 [2024-11-17T14:02:02.758Z] ===================================================================================================================
00:13:24.457 [2024-11-17T14:02:02.758Z] Total : 22880.20 89.38 0.00 0.00 66571.32 3579.27 95178.44
00:13:25.030
00:13:25.030 real 0m5.923s
00:13:25.030 user 0m9.450s
00:13:25.030 sys 0m1.428s
00:13:25.030 ************************************
00:13:25.030 END TEST bdev_verify
00:13:25.030 ************************************
00:13:25.030 14:02:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:25.030 14:02:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:13:25.030 14:02:03 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:25.030 14:02:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:13:25.030 14:02:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:25.030 14:02:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:25.030 ************************************
00:13:25.030 START TEST bdev_verify_big_io
00:13:25.030 ************************************
00:13:25.030 14:02:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:25.030 [2024-11-17 14:02:03.191926] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:13:25.288 [2024-11-17 14:02:03.192114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82251 ]
00:13:25.288 [2024-11-17 14:02:03.349307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:25.288 [2024-11-17 14:02:03.404100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:13:25.288 [2024-11-17 14:02:03.404174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.548 Running I/O for 5 seconds...
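The big-I/O pass launched above is the same bdevperf command with only the I/O size changed (-o 65536, i.e. 64 KiB per I/O instead of 4 KiB). One sanity check worth knowing for these result tables: the MiB/s column is simply IOPS multiplied by the I/O size, so the totals can be re-derived by hand:

    # MiB/s = IOPS * io_size_bytes / 2^20
    # 4 KiB verify pass (table above):   22880.20 * 4096  / 1048576 =  89.38 MiB/s
    # 64 KiB big-I/O pass (table below):  1614.52 * 65536 / 1048576 = 100.91 MiB/s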
00:13:31.397 1136.00 IOPS, 71.00 MiB/s [2024-11-17T14:02:09.698Z] 2875.50 IOPS, 179.72 MiB/s
00:13:31.397 Latency(us)
00:13:31.397 [2024-11-17T14:02:09.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:31.397 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0xa000
00:13:31.397 nvme0n1 : 5.89 129.08 8.07 0.00 0.00 947782.69 105664.20 1432516.14
00:13:31.397 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0xa000 length 0xa000
00:13:31.397 nvme0n1 : 5.91 117.00 7.31 0.00 0.00 1033769.52 76626.71 1142141.24
00:13:31.397 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0xbd0b
00:13:31.397 nvme1n1 : 5.81 176.28 11.02 0.00 0.00 674451.59 13712.15 767880.27
00:13:31.397 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:31.397 nvme1n1 : 5.93 148.41 9.28 0.00 0.00 814909.30 5192.47 1226027.32
00:13:31.397 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0x8000
00:13:31.397 nvme2n1 : 5.90 151.93 9.50 0.00 0.00 778374.81 10384.94 787238.60
00:13:31.397 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x8000 length 0x8000
00:13:31.397 nvme2n1 : 5.93 126.92 7.93 0.00 0.00 923546.64 24298.73 1309913.40
00:13:31.397 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0x8000
00:13:31.397 nvme2n2 : 5.90 122.04 7.63 0.00 0.00 942223.96 67350.84 1548666.09
00:13:31.397 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x8000 length 0x8000
00:13:31.397 nvme2n2 : 5.93 118.69 7.42 0.00 0.00 961429.88 20366.57 793691.37
00:13:31.397 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0x8000
00:13:31.397 nvme2n3 : 5.89 87.08 5.44 0.00 0.00 1279376.71 140347.86 2890843.37
00:13:31.397 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x8000 length 0x8000
00:13:31.397 nvme2n3 : 5.94 138.46 8.65 0.00 0.00 803831.11 8670.92 1451874.46
00:13:31.397 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x0 length 0x2000
00:13:31.397 nvme3n1 : 5.91 147.64 9.23 0.00 0.00 742255.15 4133.81 1910021.51
00:13:31.397 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.397 Verification LBA range: start 0x2000 length 0x2000
00:13:31.397 nvme3n1 : 5.93 151.00 9.44 0.00 0.00 715712.24 6301.54 993727.41
00:13:31.397 [2024-11-17T14:02:09.698Z] ===================================================================================================================
00:13:31.397 [2024-11-17T14:02:09.698Z] Total : 1614.52 100.91 0.00 0.00 859935.05 4133.81 2890843.37
00:13:31.659
00:13:31.659 real 0m6.770s
00:13:31.659 user 0m12.269s
00:13:31.659 sys 0m0.522s
00:13:31.659 14:02:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:31.659 14:02:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:31.659 ************************************ 00:13:31.659 END TEST bdev_verify_big_io 00:13:31.659 ************************************ 00:13:31.659 14:02:09 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.659 14:02:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:31.659 14:02:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.659 14:02:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.659 ************************************ 00:13:31.659 START TEST bdev_write_zeroes 00:13:31.659 ************************************ 00:13:31.659 14:02:09 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:31.918 [2024-11-17 14:02:10.019712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:31.918 [2024-11-17 14:02:10.019855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82351 ] 00:13:31.918 [2024-11-17 14:02:10.172648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.180 [2024-11-17 14:02:10.225970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.180 Running I/O for 1 seconds... 00:13:33.569 75872.00 IOPS, 296.38 MiB/s 00:13:33.569 Latency(us) 00:13:33.569 [2024-11-17T14:02:11.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.569 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme0n1 : 1.02 12480.83 48.75 0.00 0.00 10245.72 6503.19 21273.99 00:13:33.569 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme1n1 : 1.02 13365.94 52.21 0.00 0.00 9556.33 3125.56 21677.29 00:13:33.569 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme2n1 : 1.02 12590.32 49.18 0.00 0.00 10113.37 6553.60 19055.85 00:13:33.569 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme2n2 : 1.02 12323.86 48.14 0.00 0.00 10279.39 6503.19 19963.27 00:13:33.569 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme2n3 : 1.03 12344.80 48.22 0.00 0.00 10241.29 6503.19 20769.87 00:13:33.569 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:33.569 nvme3n1 : 1.02 12379.52 48.36 0.00 0.00 10189.08 6503.19 20669.05 00:13:33.569 [2024-11-17T14:02:11.870Z] =================================================================================================================== 00:13:33.569 [2024-11-17T14:02:11.870Z] Total : 75485.27 294.86 0.00 0.00 10096.87 3125.56 21677.29 00:13:33.569 00:13:33.569 real 0m1.774s 00:13:33.569 user 0m1.081s 00:13:33.569 sys 0m0.514s 00:13:33.569 14:02:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.569 ************************************ 00:13:33.569 END TEST bdev_write_zeroes 00:13:33.569 ************************************ 00:13:33.569 14:02:11 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:33.569 14:02:11 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.569 14:02:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:33.569 14:02:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.569 14:02:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.569 ************************************ 00:13:33.569 START TEST bdev_json_nonenclosed 00:13:33.569 ************************************ 00:13:33.569 14:02:11 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:33.569 [2024-11-17 14:02:11.863190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:33.569 [2024-11-17 14:02:11.863366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82393 ] 00:13:33.832 [2024-11-17 14:02:12.008390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.832 [2024-11-17 14:02:12.057398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.832 [2024-11-17 14:02:12.057511] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:33.832 [2024-11-17 14:02:12.057528] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:33.832 [2024-11-17 14:02:12.057540] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:34.091 00:13:34.091 real 0m0.363s 00:13:34.091 user 0m0.143s 00:13:34.091 sys 0m0.116s 00:13:34.091 14:02:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.091 ************************************ 00:13:34.091 END TEST bdev_json_nonenclosed 00:13:34.091 ************************************ 00:13:34.091 14:02:12 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:34.091 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.091 14:02:12 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:34.091 14:02:12 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.091 14:02:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.091 ************************************ 00:13:34.091 START TEST bdev_json_nonarray 00:13:34.091 ************************************ 00:13:34.091 14:02:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:34.091 [2024-11-17 14:02:12.285317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:34.091 [2024-11-17 14:02:12.285438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82417 ] 00:13:34.350 [2024-11-17 14:02:12.432871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.350 [2024-11-17 14:02:12.465076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.350 [2024-11-17 14:02:12.465171] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:34.350 [2024-11-17 14:02:12.465186] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:34.350 [2024-11-17 14:02:12.465197] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:34.350 00:13:34.350 real 0m0.321s 00:13:34.350 user 0m0.118s 00:13:34.350 sys 0m0.099s 00:13:34.350 14:02:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.350 14:02:12 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:34.350 ************************************ 00:13:34.350 END TEST bdev_json_nonarray 00:13:34.350 ************************************ 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:34.350 14:02:12 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:39.140 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.141 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.141 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.141 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:39.141 00:13:39.141 real 0m50.590s 00:13:39.141 user 1m15.330s 00:13:39.141 sys 0m35.088s 00:13:39.141 14:02:16 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.141 ************************************ 00:13:39.141 END TEST blockdev_xnvme 00:13:39.141 14:02:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:39.141 ************************************ 00:13:39.141 14:02:16 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:39.141 14:02:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.141 14:02:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.141 14:02:16 -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.141 ************************************ 00:13:39.141 START TEST ublk 00:13:39.141 ************************************ 00:13:39.141 14:02:16 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:39.141 * Looking for test storage... 00:13:39.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:39.141 14:02:16 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:39.141 14:02:16 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:13:39.141 14:02:16 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:39.141 14:02:16 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:39.141 14:02:16 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.141 14:02:16 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.141 14:02:16 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.141 14:02:16 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.141 14:02:16 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.141 14:02:16 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.141 14:02:16 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.141 14:02:16 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.141 14:02:16 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.141 14:02:16 ublk -- scripts/common.sh@344 -- # case "$op" in 00:13:39.141 14:02:16 ublk -- scripts/common.sh@345 -- # : 1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.141 14:02:16 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.141 14:02:16 ublk -- scripts/common.sh@365 -- # decimal 1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@353 -- # local d=1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.141 14:02:16 ublk -- scripts/common.sh@355 -- # echo 1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.141 14:02:16 ublk -- scripts/common.sh@366 -- # decimal 2 00:13:39.141 14:02:16 ublk -- scripts/common.sh@353 -- # local d=2 00:13:39.141 14:02:17 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.141 14:02:17 ublk -- scripts/common.sh@355 -- # echo 2 00:13:39.141 14:02:17 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.141 14:02:17 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.141 14:02:17 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.141 14:02:17 ublk -- scripts/common.sh@368 -- # return 0 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:39.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.141 --rc genhtml_branch_coverage=1 00:13:39.141 --rc genhtml_function_coverage=1 00:13:39.141 --rc genhtml_legend=1 00:13:39.141 --rc geninfo_all_blocks=1 00:13:39.141 --rc geninfo_unexecuted_blocks=1 00:13:39.141 00:13:39.141 ' 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:39.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.141 --rc genhtml_branch_coverage=1 00:13:39.141 --rc genhtml_function_coverage=1 00:13:39.141 --rc genhtml_legend=1 00:13:39.141 --rc geninfo_all_blocks=1 00:13:39.141 --rc geninfo_unexecuted_blocks=1 00:13:39.141 00:13:39.141 ' 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:39.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.141 --rc genhtml_branch_coverage=1 00:13:39.141 --rc genhtml_function_coverage=1 00:13:39.141 --rc genhtml_legend=1 00:13:39.141 --rc geninfo_all_blocks=1 00:13:39.141 --rc geninfo_unexecuted_blocks=1 00:13:39.141 00:13:39.141 ' 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:39.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.141 --rc genhtml_branch_coverage=1 00:13:39.141 --rc genhtml_function_coverage=1 00:13:39.141 --rc genhtml_legend=1 00:13:39.141 --rc geninfo_all_blocks=1 00:13:39.141 --rc geninfo_unexecuted_blocks=1 00:13:39.141 00:13:39.141 ' 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:39.141 14:02:17 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:39.141 14:02:17 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:39.141 14:02:17 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:39.141 14:02:17 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:39.141 14:02:17 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:39.141 14:02:17 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:39.141 14:02:17 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:39.141 14:02:17 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:39.141 14:02:17 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:39.141 14:02:17 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.141 14:02:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:39.141 ************************************ 00:13:39.141 START TEST test_save_ublk_config 00:13:39.141 ************************************ 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82708 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82708 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82708 ']' 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:39.141 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:39.141 [2024-11-17 14:02:17.102398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:39.141 [2024-11-17 14:02:17.102521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82708 ] 00:13:39.141 [2024-11-17 14:02:17.247457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.141 [2024-11-17 14:02:17.291935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.708 14:02:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:39.708 [2024-11-17 14:02:17.958255] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:39.708 [2024-11-17 14:02:17.958524] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:39.708 malloc0 00:13:39.708 [2024-11-17 14:02:17.982349] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:39.708 [2024-11-17 14:02:17.982431] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:39.708 [2024-11-17 14:02:17.982439] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:39.708 [2024-11-17 14:02:17.982450] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:39.708 [2024-11-17 14:02:17.990269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:39.708 [2024-11-17 14:02:17.990309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:39.708 [2024-11-17 14:02:17.998266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:39.708 [2024-11-17 14:02:17.998360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:39.967 [2024-11-17 14:02:18.015266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:39.967 0 00:13:39.967 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.967 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:39.967 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.967 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:40.225 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.225 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:13:40.225 "subsystems": [ 00:13:40.225 { 00:13:40.225 "subsystem": "fsdev", 00:13:40.225 "config": [ 00:13:40.225 { 00:13:40.225 "method": "fsdev_set_opts", 00:13:40.225 "params": { 00:13:40.225 "fsdev_io_pool_size": 65535, 00:13:40.225 "fsdev_io_cache_size": 256 00:13:40.225 } 00:13:40.225 } 00:13:40.225 ] 00:13:40.225 }, 00:13:40.225 { 00:13:40.225 "subsystem": "keyring", 00:13:40.225 "config": [] 00:13:40.225 }, 00:13:40.225 { 00:13:40.225 "subsystem": "iobuf", 00:13:40.225 "config": [ 00:13:40.225 { 
00:13:40.225 "method": "iobuf_set_options", 00:13:40.225 "params": { 00:13:40.225 "small_pool_count": 8192, 00:13:40.225 "large_pool_count": 1024, 00:13:40.225 "small_bufsize": 8192, 00:13:40.225 "large_bufsize": 135168 00:13:40.225 } 00:13:40.225 } 00:13:40.225 ] 00:13:40.225 }, 00:13:40.226 { 00:13:40.226 "subsystem": "sock", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "sock_set_default_impl", 00:13:40.226 "params": { 00:13:40.226 "impl_name": "posix" 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "sock_impl_set_options", 00:13:40.226 "params": { 00:13:40.226 "impl_name": "ssl", 00:13:40.226 "recv_buf_size": 4096, 00:13:40.226 "send_buf_size": 4096, 00:13:40.226 "enable_recv_pipe": true, 00:13:40.226 "enable_quickack": false, 00:13:40.226 "enable_placement_id": 0, 00:13:40.226 "enable_zerocopy_send_server": true, 00:13:40.226 "enable_zerocopy_send_client": false, 00:13:40.226 "zerocopy_threshold": 0, 00:13:40.226 "tls_version": 0, 00:13:40.226 "enable_ktls": false 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "sock_impl_set_options", 00:13:40.226 "params": { 00:13:40.226 "impl_name": "posix", 00:13:40.226 "recv_buf_size": 2097152, 00:13:40.226 "send_buf_size": 2097152, 00:13:40.226 "enable_recv_pipe": true, 00:13:40.226 "enable_quickack": false, 00:13:40.226 "enable_placement_id": 0, 00:13:40.226 "enable_zerocopy_send_server": true, 00:13:40.226 "enable_zerocopy_send_client": false, 00:13:40.226 "zerocopy_threshold": 0, 00:13:40.226 "tls_version": 0, 00:13:40.226 "enable_ktls": false 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "vmd", 00:13:40.226 "config": [] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "accel", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "accel_set_options", 00:13:40.226 "params": { 00:13:40.226 "small_cache_size": 128, 00:13:40.226 "large_cache_size": 16, 00:13:40.226 "task_count": 2048, 00:13:40.226 "sequence_count": 2048, 00:13:40.226 "buf_count": 2048 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "bdev", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "bdev_set_options", 00:13:40.226 "params": { 00:13:40.226 "bdev_io_pool_size": 65535, 00:13:40.226 "bdev_io_cache_size": 256, 00:13:40.226 "bdev_auto_examine": true, 00:13:40.226 "iobuf_small_cache_size": 128, 00:13:40.226 "iobuf_large_cache_size": 16 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_raid_set_options", 00:13:40.226 "params": { 00:13:40.226 "process_window_size_kb": 1024, 00:13:40.226 "process_max_bandwidth_mb_sec": 0 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_iscsi_set_options", 00:13:40.226 "params": { 00:13:40.226 "timeout_sec": 30 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_nvme_set_options", 00:13:40.226 "params": { 00:13:40.226 "action_on_timeout": "none", 00:13:40.226 "timeout_us": 0, 00:13:40.226 "timeout_admin_us": 0, 00:13:40.226 "keep_alive_timeout_ms": 10000, 00:13:40.226 "arbitration_burst": 0, 00:13:40.226 "low_priority_weight": 0, 00:13:40.226 "medium_priority_weight": 0, 00:13:40.226 "high_priority_weight": 0, 00:13:40.226 "nvme_adminq_poll_period_us": 10000, 00:13:40.226 "nvme_ioq_poll_period_us": 0, 00:13:40.226 "io_queue_requests": 0, 00:13:40.226 "delay_cmd_submit": true, 00:13:40.226 "transport_retry_count": 4, 00:13:40.226 "bdev_retry_count": 3, 00:13:40.226 
"transport_ack_timeout": 0, 00:13:40.226 "ctrlr_loss_timeout_sec": 0, 00:13:40.226 "reconnect_delay_sec": 0, 00:13:40.226 "fast_io_fail_timeout_sec": 0, 00:13:40.226 "disable_auto_failback": false, 00:13:40.226 "generate_uuids": false, 00:13:40.226 "transport_tos": 0, 00:13:40.226 "nvme_error_stat": false, 00:13:40.226 "rdma_srq_size": 0, 00:13:40.226 "io_path_stat": false, 00:13:40.226 "allow_accel_sequence": false, 00:13:40.226 "rdma_max_cq_size": 0, 00:13:40.226 "rdma_cm_event_timeout_ms": 0, 00:13:40.226 "dhchap_digests": [ 00:13:40.226 "sha256", 00:13:40.226 "sha384", 00:13:40.226 "sha512" 00:13:40.226 ], 00:13:40.226 "dhchap_dhgroups": [ 00:13:40.226 "null", 00:13:40.226 "ffdhe2048", 00:13:40.226 "ffdhe3072", 00:13:40.226 "ffdhe4096", 00:13:40.226 "ffdhe6144", 00:13:40.226 "ffdhe8192" 00:13:40.226 ] 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_nvme_set_hotplug", 00:13:40.226 "params": { 00:13:40.226 "period_us": 100000, 00:13:40.226 "enable": false 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_malloc_create", 00:13:40.226 "params": { 00:13:40.226 "name": "malloc0", 00:13:40.226 "num_blocks": 8192, 00:13:40.226 "block_size": 4096, 00:13:40.226 "physical_block_size": 4096, 00:13:40.226 "uuid": "f0f91461-cf4a-4c03-a8a3-e1c4ac54f638", 00:13:40.226 "optimal_io_boundary": 0, 00:13:40.226 "md_size": 0, 00:13:40.226 "dif_type": 0, 00:13:40.226 "dif_is_head_of_md": false, 00:13:40.226 "dif_pi_format": 0 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "bdev_wait_for_examine" 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "scsi", 00:13:40.226 "config": null 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "scheduler", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "framework_set_scheduler", 00:13:40.226 "params": { 00:13:40.226 "name": "static" 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "vhost_scsi", 00:13:40.226 "config": [] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "vhost_blk", 00:13:40.226 "config": [] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "ublk", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "ublk_create_target", 00:13:40.226 "params": { 00:13:40.226 "cpumask": "1" 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "ublk_start_disk", 00:13:40.226 "params": { 00:13:40.226 "bdev_name": "malloc0", 00:13:40.226 "ublk_id": 0, 00:13:40.226 "num_queues": 1, 00:13:40.226 "queue_depth": 128 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "nbd", 00:13:40.226 "config": [] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "nvmf", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "nvmf_set_config", 00:13:40.226 "params": { 00:13:40.226 "discovery_filter": "match_any", 00:13:40.226 "admin_cmd_passthru": { 00:13:40.226 "identify_ctrlr": false 00:13:40.226 }, 00:13:40.226 "dhchap_digests": [ 00:13:40.226 "sha256", 00:13:40.226 "sha384", 00:13:40.226 "sha512" 00:13:40.226 ], 00:13:40.226 "dhchap_dhgroups": [ 00:13:40.226 "null", 00:13:40.226 "ffdhe2048", 00:13:40.226 "ffdhe3072", 00:13:40.226 "ffdhe4096", 00:13:40.226 "ffdhe6144", 00:13:40.226 "ffdhe8192" 00:13:40.226 ] 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "method": "nvmf_set_max_subsystems", 00:13:40.226 "params": { 00:13:40.226 "max_subsystems": 1024 00:13:40.226 } 00:13:40.226 }, 00:13:40.226 
{ 00:13:40.226 "method": "nvmf_set_crdt", 00:13:40.226 "params": { 00:13:40.226 "crdt1": 0, 00:13:40.226 "crdt2": 0, 00:13:40.226 "crdt3": 0 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }, 00:13:40.226 { 00:13:40.226 "subsystem": "iscsi", 00:13:40.226 "config": [ 00:13:40.226 { 00:13:40.226 "method": "iscsi_set_options", 00:13:40.226 "params": { 00:13:40.226 "node_base": "iqn.2016-06.io.spdk", 00:13:40.226 "max_sessions": 128, 00:13:40.226 "max_connections_per_session": 2, 00:13:40.226 "max_queue_depth": 64, 00:13:40.226 "default_time2wait": 2, 00:13:40.226 "default_time2retain": 20, 00:13:40.226 "first_burst_length": 8192, 00:13:40.226 "immediate_data": true, 00:13:40.226 "allow_duplicated_isid": false, 00:13:40.226 "error_recovery_level": 0, 00:13:40.226 "nop_timeout": 60, 00:13:40.226 "nop_in_interval": 30, 00:13:40.226 "disable_chap": false, 00:13:40.226 "require_chap": false, 00:13:40.226 "mutual_chap": false, 00:13:40.226 "chap_group": 0, 00:13:40.226 "max_large_datain_per_connection": 64, 00:13:40.226 "max_r2t_per_connection": 4, 00:13:40.226 "pdu_pool_size": 36864, 00:13:40.226 "immediate_data_pool_size": 16384, 00:13:40.226 "data_out_pool_size": 2048 00:13:40.226 } 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 } 00:13:40.226 ] 00:13:40.226 }' 00:13:40.226 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82708 00:13:40.226 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82708 ']' 00:13:40.226 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82708 00:13:40.226 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82708 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82708' 00:13:40.227 killing process with pid 82708 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82708 00:13:40.227 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82708 00:13:40.227 [2024-11-17 14:02:18.503307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:40.485 [2024-11-17 14:02:18.533281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:40.485 [2024-11-17 14:02:18.533406] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:40.485 [2024-11-17 14:02:18.541273] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:40.485 [2024-11-17 14:02:18.541334] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:40.485 [2024-11-17 14:02:18.541341] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:40.485 [2024-11-17 14:02:18.541370] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:40.485 [2024-11-17 14:02:18.541501] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82745 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82745 
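What test_save_ublk_config is exercising here, conceptually: the first spdk_tgt (pid 82708) created a ublk target plus /dev/ublkb0 over a malloc bdev via RPC, its live state was dumped with save_config (the JSON above), and the second spdk_tgt (82745) is now started with that JSON fed back through -c /dev/fd/63, the file descriptor that bash process substitution produces. A hand-run sketch of the same round trip; the kill/wait plumbing is a simplification of the script's killprocess helper, not its exact code:

    build/bin/spdk_tgt -L ublk & tgtpid=$!           # first target
    scripts/rpc.py ublk_create_target
    # ... create the malloc bdev and start /dev/ublkb0 via RPC ...
    config=$(scripts/rpc.py save_config)             # capture live state as JSON
    kill "$tgtpid"; wait "$tgtpid"
    build/bin/spdk_tgt -L ublk -c <(echo "$config")  # replay the config; ublkb0 should reappear
    scripts/rpc.py ublk_get_disks                    # expect ublk_device /dev/ublkb0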
00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82745 ']' 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:40.744 14:02:18 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:13:40.744 "subsystems": [ 00:13:40.744 { 00:13:40.744 "subsystem": "fsdev", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "fsdev_set_opts", 00:13:40.744 "params": { 00:13:40.744 "fsdev_io_pool_size": 65535, 00:13:40.744 "fsdev_io_cache_size": 256 00:13:40.744 } 00:13:40.744 } 00:13:40.744 ] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "keyring", 00:13:40.744 "config": [] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "iobuf", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "iobuf_set_options", 00:13:40.744 "params": { 00:13:40.744 "small_pool_count": 8192, 00:13:40.744 "large_pool_count": 1024, 00:13:40.744 "small_bufsize": 8192, 00:13:40.744 "large_bufsize": 135168 00:13:40.744 } 00:13:40.744 } 00:13:40.744 ] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "sock", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "sock_set_default_impl", 00:13:40.744 "params": { 00:13:40.744 "impl_name": "posix" 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "sock_impl_set_options", 00:13:40.744 "params": { 00:13:40.744 "impl_name": "ssl", 00:13:40.744 "recv_buf_size": 4096, 00:13:40.744 "send_buf_size": 4096, 00:13:40.744 "enable_recv_pipe": true, 00:13:40.744 "enable_quickack": false, 00:13:40.744 "enable_placement_id": 0, 00:13:40.744 "enable_zerocopy_send_server": true, 00:13:40.744 "enable_zerocopy_send_client": false, 00:13:40.744 "zerocopy_threshold": 0, 00:13:40.744 "tls_version": 0, 00:13:40.744 "enable_ktls": false 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "sock_impl_set_options", 00:13:40.744 "params": { 00:13:40.744 "impl_name": "posix", 00:13:40.744 "recv_buf_size": 2097152, 00:13:40.744 "send_buf_size": 2097152, 00:13:40.744 "enable_recv_pipe": true, 00:13:40.744 "enable_quickack": false, 00:13:40.744 "enable_placement_id": 0, 00:13:40.744 "enable_zerocopy_send_server": true, 00:13:40.744 "enable_zerocopy_send_client": false, 00:13:40.744 "zerocopy_threshold": 0, 00:13:40.744 "tls_version": 0, 00:13:40.744 "enable_ktls": false 00:13:40.744 } 00:13:40.744 } 00:13:40.744 ] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "vmd", 00:13:40.744 "config": [] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "accel", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "accel_set_options", 00:13:40.744 "params": { 00:13:40.744 "small_cache_size": 128, 00:13:40.744 "large_cache_size": 16, 00:13:40.744 "task_count": 2048, 00:13:40.744 
"sequence_count": 2048, 00:13:40.744 "buf_count": 2048 00:13:40.744 } 00:13:40.744 } 00:13:40.744 ] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "bdev", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "bdev_set_options", 00:13:40.744 "params": { 00:13:40.744 "bdev_io_pool_size": 65535, 00:13:40.744 "bdev_io_cache_size": 256, 00:13:40.744 "bdev_auto_examine": true, 00:13:40.744 "iobuf_small_cache_size": 128, 00:13:40.744 "iobuf_large_cache_size": 16 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_raid_set_options", 00:13:40.744 "params": { 00:13:40.744 "process_window_size_kb": 1024, 00:13:40.744 "process_max_bandwidth_mb_sec": 0 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_iscsi_set_options", 00:13:40.744 "params": { 00:13:40.744 "timeout_sec": 30 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_nvme_set_options", 00:13:40.744 "params": { 00:13:40.744 "action_on_timeout": "none", 00:13:40.744 "timeout_us": 0, 00:13:40.744 "timeout_admin_us": 0, 00:13:40.744 "keep_alive_timeout_ms": 10000, 00:13:40.744 "arbitration_burst": 0, 00:13:40.744 "low_priority_weight": 0, 00:13:40.744 "medium_priority_weight": 0, 00:13:40.744 "high_priority_weight": 0, 00:13:40.744 "nvme_adminq_poll_period_us": 10000, 00:13:40.744 "nvme_ioq_poll_period_us": 0, 00:13:40.744 "io_queue_requests": 0, 00:13:40.744 "delay_cmd_submit": true, 00:13:40.744 "transport_retry_count": 4, 00:13:40.744 "bdev_retry_count": 3, 00:13:40.744 "transport_ack_timeout": 0, 00:13:40.744 "ctrlr_loss_timeout_sec": 0, 00:13:40.744 "reconnect_delay_sec": 0, 00:13:40.744 "fast_io_fail_timeout_sec": 0, 00:13:40.744 "disable_auto_failback": false, 00:13:40.744 "generate_uuids": false, 00:13:40.744 "transport_tos": 0, 00:13:40.744 "nvme_error_stat": false, 00:13:40.744 "rdma_srq_size": 0, 00:13:40.744 "io_path_stat": false, 00:13:40.744 "allow_accel_sequence": false, 00:13:40.744 "rdma_max_cq_size": 0, 00:13:40.744 "rdma_cm_event_timeout_ms": 0, 00:13:40.744 "dhchap_digests": [ 00:13:40.744 "sha256", 00:13:40.744 "sha384", 00:13:40.744 "sha512" 00:13:40.744 ], 00:13:40.744 "dhchap_dhgroups": [ 00:13:40.744 "null", 00:13:40.744 "ffdhe2048", 00:13:40.744 "ffdhe3072", 00:13:40.744 "ffdhe4096", 00:13:40.744 "ffdhe6144", 00:13:40.744 "ffdhe8192" 00:13:40.744 ] 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_nvme_set_hotplug", 00:13:40.744 "params": { 00:13:40.744 "period_us": 100000, 00:13:40.744 "enable": false 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_malloc_create", 00:13:40.744 "params": { 00:13:40.744 "name": "malloc0", 00:13:40.744 "num_blocks": 8192, 00:13:40.744 "block_size": 4096, 00:13:40.744 "physical_block_size": 4096, 00:13:40.744 "uuid": "f0f91461-cf4a-4c03-a8a3-e1c4ac54f638", 00:13:40.744 "optimal_io_boundary": 0, 00:13:40.744 "md_size": 0, 00:13:40.744 "dif_type": 0, 00:13:40.744 "dif_is_head_of_md": false, 00:13:40.744 "dif_pi_format": 0 00:13:40.744 } 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "method": "bdev_wait_for_examine" 00:13:40.744 } 00:13:40.744 ] 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "scsi", 00:13:40.744 "config": null 00:13:40.744 }, 00:13:40.744 { 00:13:40.744 "subsystem": "scheduler", 00:13:40.744 "config": [ 00:13:40.744 { 00:13:40.744 "method": "framework_set_scheduler", 00:13:40.745 "params": { 00:13:40.745 "name": "static" 00:13:40.745 } 00:13:40.745 } 00:13:40.745 ] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": 
"vhost_scsi", 00:13:40.745 "config": [] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": "vhost_blk", 00:13:40.745 "config": [] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": "ublk", 00:13:40.745 "config": [ 00:13:40.745 { 00:13:40.745 "method": "ublk_create_target", 00:13:40.745 "params": { 00:13:40.745 "cpumask": "1" 00:13:40.745 } 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "method": "ublk_start_disk", 00:13:40.745 "params": { 00:13:40.745 "bdev_name": "malloc0", 00:13:40.745 "ublk_id": 0, 00:13:40.745 "num_queues": 1, 00:13:40.745 "queue_depth": 128 00:13:40.745 } 00:13:40.745 } 00:13:40.745 ] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": "nbd", 00:13:40.745 "config": [] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": "nvmf", 00:13:40.745 "config": [ 00:13:40.745 { 00:13:40.745 "method": "nvmf_set_config", 00:13:40.745 "params": { 00:13:40.745 "discovery_filter": "match_any", 00:13:40.745 "admin_cmd_passthru": { 00:13:40.745 "identify_ctrlr": false 00:13:40.745 }, 00:13:40.745 "dhchap_digests": [ 00:13:40.745 "sha256", 00:13:40.745 "sha384", 00:13:40.745 "sha512" 00:13:40.745 ], 00:13:40.745 "dhchap_dhgroups": [ 00:13:40.745 "null", 00:13:40.745 "ffdhe2048", 00:13:40.745 "ffdhe3072", 00:13:40.745 "ffdhe4096", 00:13:40.745 "ffdhe6144", 00:13:40.745 "ffdhe8192" 00:13:40.745 ] 00:13:40.745 } 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "method": "nvmf_set_max_subsystems", 00:13:40.745 "params": { 00:13:40.745 "max_subsystems": 1024 00:13:40.745 } 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "method": "nvmf_set_crdt", 00:13:40.745 "params": { 00:13:40.745 "crdt1": 0, 00:13:40.745 "crdt2": 0, 00:13:40.745 "crdt3": 0 00:13:40.745 } 00:13:40.745 } 00:13:40.745 ] 00:13:40.745 }, 00:13:40.745 { 00:13:40.745 "subsystem": "iscsi", 00:13:40.745 "config": [ 00:13:40.745 { 00:13:40.745 "method": "iscsi_set_options", 00:13:40.745 "params": { 00:13:40.745 "node_base": "iqn.2016-06.io.spdk", 00:13:40.745 "max_sessions": 128, 00:13:40.745 "max_connections_per_session": 2, 00:13:40.745 "max_queue_depth": 64, 00:13:40.745 "default_time2wait": 2, 00:13:40.745 "default_time2retain": 20, 00:13:40.745 "first_burst_length": 8192, 00:13:40.745 "immediate_data": true, 00:13:40.745 "allow_duplicated_isid": false, 00:13:40.745 "error_recovery_level": 0, 00:13:40.745 "nop_timeout": 60, 00:13:40.745 "nop_in_interval": 30, 00:13:40.745 "disable_chap": false, 00:13:40.745 "require_chap": false, 00:13:40.745 "mutual_chap": false, 00:13:40.745 "chap_group": 0, 00:13:40.745 "max_large_datain_per_connection": 64, 00:13:40.745 "max_r2t_per_connection": 4, 00:13:40.745 "pdu_pool_size": 36864, 00:13:40.745 "immediate_data_pool_size": 16384, 00:13:40.745 "data_out_pool_size": 2048 00:13:40.745 } 00:13:40.745 } 00:13:40.745 ] 00:13:40.745 } 00:13:40.745 ] 00:13:40.745 }' 00:13:40.745 [2024-11-17 14:02:18.920268] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:40.745 [2024-11-17 14:02:18.920385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82745 ] 00:13:41.003 [2024-11-17 14:02:19.068576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.003 [2024-11-17 14:02:19.110404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.261 [2024-11-17 14:02:19.408255] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:41.261 [2024-11-17 14:02:19.408517] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:41.261 [2024-11-17 14:02:19.416368] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:41.261 [2024-11-17 14:02:19.416441] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:41.261 [2024-11-17 14:02:19.416449] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:41.261 [2024-11-17 14:02:19.416456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:41.261 [2024-11-17 14:02:19.424371] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:41.261 [2024-11-17 14:02:19.424394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:41.261 [2024-11-17 14:02:19.432272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:41.261 [2024-11-17 14:02:19.432358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:41.261 [2024-11-17 14:02:19.449255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82745 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82745 ']' 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82745 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.520 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82745 00:13:41.778 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.778 killing process with pid 82745 00:13:41.778 
14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.778 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82745' 00:13:41.778 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82745 00:13:41.778 14:02:19 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82745 00:13:41.779 [2024-11-17 14:02:19.992897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:41.779 [2024-11-17 14:02:20.032276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:41.779 [2024-11-17 14:02:20.032414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:41.779 [2024-11-17 14:02:20.038258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:41.779 [2024-11-17 14:02:20.038320] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:41.779 [2024-11-17 14:02:20.038334] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:41.779 [2024-11-17 14:02:20.038366] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:41.779 [2024-11-17 14:02:20.038503] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:42.345 14:02:20 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:42.345 00:13:42.345 real 0m3.342s 00:13:42.345 user 0m2.494s 00:13:42.345 sys 0m1.469s 00:13:42.345 14:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.345 14:02:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:13:42.345 ************************************ 00:13:42.345 END TEST test_save_ublk_config 00:13:42.345 ************************************ 00:13:42.345 14:02:20 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82791 00:13:42.345 14:02:20 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.345 14:02:20 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82791 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@831 -- # '[' -z 82791 ']' 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.345 14:02:20 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.345 14:02:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:42.345 [2024-11-17 14:02:20.472855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:42.345 [2024-11-17 14:02:20.472977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82791 ] 00:13:42.345 [2024-11-17 14:02:20.620495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:42.603 [2024-11-17 14:02:20.652721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.603 [2024-11-17 14:02:20.652843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.168 14:02:21 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.168 14:02:21 ublk -- common/autotest_common.sh@864 -- # return 0 00:13:43.168 14:02:21 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:43.168 14:02:21 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:43.168 14:02:21 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.168 14:02:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.168 ************************************ 00:13:43.168 START TEST test_create_ublk 00:13:43.168 ************************************ 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:13:43.168 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.168 [2024-11-17 14:02:21.323256] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:43.168 [2024-11-17 14:02:21.324331] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.168 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:13:43.168 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.168 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:43.168 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.168 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.168 [2024-11-17 14:02:21.390393] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:43.168 [2024-11-17 14:02:21.390778] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:43.168 [2024-11-17 14:02:21.390800] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:43.168 [2024-11-17 14:02:21.390815] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:43.169 [2024-11-17 14:02:21.398274] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:43.169 [2024-11-17 14:02:21.398307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:43.169 
[2024-11-17 14:02:21.406270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:43.169 [2024-11-17 14:02:21.406891] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:43.169 [2024-11-17 14:02:21.417352] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:43.169 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.169 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:43.169 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:43.169 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:43.169 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.169 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:43.169 14:02:21 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.169 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:43.169 { 00:13:43.169 "ublk_device": "/dev/ublkb0", 00:13:43.169 "id": 0, 00:13:43.169 "queue_depth": 512, 00:13:43.169 "num_queues": 4, 00:13:43.169 "bdev_name": "Malloc0" 00:13:43.169 } 00:13:43.169 ]' 00:13:43.169 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
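The device exercised by the fio run below was assembled by the three RPCs traced above; a minimal sketch of the same sequence with the parameters from this run (a running spdk_tgt and the rpc.py path used elsewhere in this log are assumed):

    # create the ublk target, back it with a 128 MiB malloc bdev (4 KiB blocks),
    # and expose it as /dev/ublkb0 with 4 queues of depth 512
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    malloc_name=$("$rpc" bdev_malloc_create 128 4096)   # prints the bdev name (Malloc0 here)
    "$rpc" ublk_start_disk "$malloc_name" 0 -q 4 -d 512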
00:13:43.427 14:02:21 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:43.427 fio: verification read phase will never start because write phase uses all of runtime 00:13:43.427 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:43.427 fio-3.35 00:13:43.427 Starting 1 process 00:13:55.628 00:13:55.628 fio_test: (groupid=0, jobs=1): err= 0: pid=82836: Sun Nov 17 14:02:31 2024 00:13:55.628 write: IOPS=18.2k, BW=71.1MiB/s (74.5MB/s)(711MiB/10001msec); 0 zone resets 00:13:55.628 clat (usec): min=35, max=10890, avg=54.23, stdev=119.73 00:13:55.628 lat (usec): min=35, max=10907, avg=54.65, stdev=119.75 00:13:55.628 clat percentiles (usec): 00:13:55.628 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 43], 00:13:55.628 | 30.00th=[ 45], 40.00th=[ 47], 50.00th=[ 49], 60.00th=[ 51], 00:13:55.628 | 70.00th=[ 53], 80.00th=[ 56], 90.00th=[ 61], 95.00th=[ 64], 00:13:55.628 | 99.00th=[ 76], 99.50th=[ 82], 99.90th=[ 2474], 99.95th=[ 3326], 00:13:55.628 | 99.99th=[ 3916] 00:13:55.628 bw ( KiB/s): min=27296, max=83848, per=99.54%, avg=72439.16, stdev=13346.13, samples=19 00:13:55.628 iops : min= 6824, max=20962, avg=18109.79, stdev=3336.53, samples=19 00:13:55.628 lat (usec) : 50=58.77%, 100=40.92%, 250=0.12%, 500=0.02%, 750=0.01% 00:13:55.628 lat (usec) : 1000=0.01% 00:13:55.628 lat (msec) : 2=0.04%, 4=0.12%, 10=0.01%, 20=0.01% 00:13:55.628 cpu : usr=2.62%, sys=13.03%, ctx=182013, majf=0, minf=797 00:13:55.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.628 issued rwts: total=0,181960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:55.628 00:13:55.628 Run status group 0 (all jobs): 00:13:55.628 WRITE: bw=71.1MiB/s (74.5MB/s), 71.1MiB/s-71.1MiB/s (74.5MB/s-74.5MB/s), io=711MiB (745MB), run=10001-10001msec 00:13:55.628 00:13:55.628 Disk stats (read/write): 00:13:55.628 ublkb0: ios=0/179896, merge=0/0, ticks=0/8460, in_queue=8461, util=99.05% 00:13:55.628 14:02:31 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.628 [2024-11-17 14:02:31.829122] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.628 [2024-11-17 14:02:31.870281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.628 [2024-11-17 14:02:31.870858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.628 [2024-11-17 14:02:31.878337] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.628 [2024-11-17 14:02:31.878571] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.628 [2024-11-17 14:02:31.878583] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.628 14:02:31 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.628 [2024-11-17 14:02:31.894341] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:55.628 request: 00:13:55.628 { 00:13:55.628 "ublk_id": 0, 00:13:55.628 "method": "ublk_stop_disk", 00:13:55.628 "req_id": 1 00:13:55.628 } 00:13:55.628 Got JSON-RPC error response 00:13:55.628 response: 00:13:55.628 { 00:13:55.628 "code": -19, 00:13:55.628 "message": "No such device" 00:13:55.628 } 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.628 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.629 14:02:31 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 [2024-11-17 14:02:31.910322] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.629 [2024-11-17 14:02:31.911220] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:55.629 [2024-11-17 14:02:31.911257] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:31 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:31 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:55.629 14:02:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:31 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:31 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:55.629 14:02:31 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:13:55.629 14:02:32 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:55.629 14:02:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:55.629 14:02:32 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:32 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:55.629 14:02:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:13:55.629 14:02:32 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:55.629 00:13:55.629 real 0m10.755s 00:13:55.629 user 0m0.564s 00:13:55.629 sys 0m1.369s 00:13:55.629 14:02:32 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.629 ************************************ 00:13:55.629 END TEST test_create_ublk 00:13:55.629 14:02:32 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 ************************************ 00:13:55.629 14:02:32 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:55.629 14:02:32 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:55.629 14:02:32 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.629 14:02:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 ************************************ 00:13:55.629 START TEST test_create_multi_ublk 00:13:55.629 ************************************ 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 [2024-11-17 14:02:32.121255] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:55.629 [2024-11-17 14:02:32.122163] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 [2024-11-17 14:02:32.205366] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:13:55.629 [2024-11-17 14:02:32.205654] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:55.629 [2024-11-17 14:02:32.205667] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:55.629 [2024-11-17 14:02:32.205672] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.629 [2024-11-17 14:02:32.217470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.629 [2024-11-17 14:02:32.217487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.629 [2024-11-17 14:02:32.229279] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.629 [2024-11-17 14:02:32.229749] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:55.629 [2024-11-17 14:02:32.260264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 [2024-11-17 14:02:32.343373] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:55.629 [2024-11-17 14:02:32.343680] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:55.629 [2024-11-17 14:02:32.343692] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:55.629 [2024-11-17 14:02:32.343699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.629 [2024-11-17 14:02:32.355270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.629 [2024-11-17 14:02:32.355291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.629 [2024-11-17 14:02:32.367254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.629 [2024-11-17 14:02:32.367752] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:55.629 [2024-11-17 14:02:32.370965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.629 
14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.629 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.629 [2024-11-17 14:02:32.451354] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:55.629 [2024-11-17 14:02:32.451653] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:55.629 [2024-11-17 14:02:32.451667] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:55.629 [2024-11-17 14:02:32.451673] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.629 [2024-11-17 14:02:32.463267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.629 [2024-11-17 14:02:32.463285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.629 [2024-11-17 14:02:32.475256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.629 [2024-11-17 14:02:32.475746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:55.629 [2024-11-17 14:02:32.480583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.630 [2024-11-17 14:02:32.563345] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:55.630 [2024-11-17 14:02:32.563641] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:55.630 [2024-11-17 14:02:32.563654] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:55.630 [2024-11-17 14:02:32.563660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:55.630 
[2024-11-17 14:02:32.575270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:55.630 [2024-11-17 14:02:32.575293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:55.630 [2024-11-17 14:02:32.587259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:55.630 [2024-11-17 14:02:32.587744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:55.630 [2024-11-17 14:02:32.590509] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:55.630 { 00:13:55.630 "ublk_device": "/dev/ublkb0", 00:13:55.630 "id": 0, 00:13:55.630 "queue_depth": 512, 00:13:55.630 "num_queues": 4, 00:13:55.630 "bdev_name": "Malloc0" 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "ublk_device": "/dev/ublkb1", 00:13:55.630 "id": 1, 00:13:55.630 "queue_depth": 512, 00:13:55.630 "num_queues": 4, 00:13:55.630 "bdev_name": "Malloc1" 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "ublk_device": "/dev/ublkb2", 00:13:55.630 "id": 2, 00:13:55.630 "queue_depth": 512, 00:13:55.630 "num_queues": 4, 00:13:55.630 "bdev_name": "Malloc2" 00:13:55.630 }, 00:13:55.630 { 00:13:55.630 "ublk_device": "/dev/ublkb3", 00:13:55.630 "id": 3, 00:13:55.630 "queue_depth": 512, 00:13:55.630 "num_queues": 4, 00:13:55.630 "bdev_name": "Malloc3" 00:13:55.630 } 00:13:55.630 ]' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:55.630 14:02:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.630 [2024-11-17 14:02:33.251332] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.630 [2024-11-17 14:02:33.290611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.630 [2024-11-17 14:02:33.291670] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.630 [2024-11-17 14:02:33.299265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.630 [2024-11-17 14:02:33.299490] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:55.630 [2024-11-17 14:02:33.299501] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.630 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.631 [2024-11-17 14:02:33.315328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.631 [2024-11-17 14:02:33.355695] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.631 [2024-11-17 14:02:33.356628] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.631 [2024-11-17 14:02:33.363259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.631 [2024-11-17 14:02:33.363483] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:55.631 [2024-11-17 14:02:33.363494] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.631 [2024-11-17 14:02:33.377357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.631 [2024-11-17 14:02:33.412278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.631 [2024-11-17 14:02:33.412879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.631 [2024-11-17 14:02:33.422319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.631 [2024-11-17 14:02:33.422562] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:55.631 [2024-11-17 14:02:33.422573] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.631 [2024-11-17 14:02:33.429318] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:55.631 [2024-11-17 14:02:33.478291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:55.631 [2024-11-17 14:02:33.478868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:55.631 [2024-11-17 14:02:33.490257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:55.631 [2024-11-17 14:02:33.490483] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:55.631 [2024-11-17 14:02:33.490492] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:55.631 [2024-11-17 14:02:33.686335] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:55.631 [2024-11-17 14:02:33.687172] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:55.631 [2024-11-17 14:02:33.687207] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.631 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:55.900 14:02:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.900 14:02:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:55.901 00:13:55.901 real 0m1.932s 00:13:55.901 user 0m0.804s 00:13:55.901 sys 0m0.130s 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.901 14:02:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:13:55.901 ************************************ 00:13:55.901 END TEST test_create_multi_ublk 00:13:55.901 ************************************ 00:13:55.901 14:02:34 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:55.901 14:02:34 ublk -- ublk/ublk.sh@147 -- # cleanup 00:13:55.901 14:02:34 ublk -- ublk/ublk.sh@130 -- # killprocess 82791 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@950 -- # '[' -z 82791 ']' 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@954 -- # kill -0 82791 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@955 -- # uname 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82791 00:13:55.901 killing process with pid 82791 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82791' 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@969 -- # kill 82791 00:13:55.901 14:02:34 ublk -- common/autotest_common.sh@974 -- # wait 82791 00:13:56.203 [2024-11-17 14:02:34.249928] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:13:56.203 [2024-11-17 14:02:34.249980] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:13:56.462 00:13:56.462 real 0m17.655s 00:13:56.462 user 0m27.853s 00:13:56.462 sys 0m7.276s 00:13:56.462 14:02:34 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.462 14:02:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:13:56.462 ************************************ 00:13:56.462 END TEST ublk 00:13:56.462 ************************************ 00:13:56.462 14:02:34 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:56.462 
14:02:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:56.462 14:02:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.462 14:02:34 -- common/autotest_common.sh@10 -- # set +x 00:13:56.462 ************************************ 00:13:56.462 START TEST ublk_recovery 00:13:56.462 ************************************ 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:56.462 * Looking for test storage... 00:13:56.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.462 14:02:34 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:56.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.462 --rc genhtml_branch_coverage=1 00:13:56.462 --rc genhtml_function_coverage=1 00:13:56.462 --rc genhtml_legend=1 00:13:56.462 --rc geninfo_all_blocks=1 00:13:56.462 --rc geninfo_unexecuted_blocks=1 00:13:56.462 00:13:56.462 ' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:56.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.462 --rc genhtml_branch_coverage=1 00:13:56.462 --rc genhtml_function_coverage=1 00:13:56.462 --rc genhtml_legend=1 00:13:56.462 --rc geninfo_all_blocks=1 00:13:56.462 --rc geninfo_unexecuted_blocks=1 00:13:56.462 00:13:56.462 ' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:56.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.462 --rc genhtml_branch_coverage=1 00:13:56.462 --rc genhtml_function_coverage=1 00:13:56.462 --rc genhtml_legend=1 00:13:56.462 --rc geninfo_all_blocks=1 00:13:56.462 --rc geninfo_unexecuted_blocks=1 00:13:56.462 00:13:56.462 ' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:56.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.462 --rc genhtml_branch_coverage=1 00:13:56.462 --rc genhtml_function_coverage=1 00:13:56.462 --rc genhtml_legend=1 00:13:56.462 --rc geninfo_all_blocks=1 00:13:56.462 --rc geninfo_unexecuted_blocks=1 00:13:56.462 00:13:56.462 ' 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:56.462 14:02:34 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83154 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83154 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83154 ']' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.462 14:02:34 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.462 14:02:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:56.721 [2024-11-17 14:02:34.766513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:56.721 [2024-11-17 14:02:34.766942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83154 ] 00:13:56.721 [2024-11-17 14:02:34.910796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.721 [2024-11-17 14:02:34.940183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.721 [2024-11-17 14:02:34.940230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:13:57.655 14:02:35 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.655 [2024-11-17 14:02:35.604253] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:13:57.655 [2024-11-17 14:02:35.605206] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.655 14:02:35 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.655 malloc0 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.655 14:02:35 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:13:57.655 [2024-11-17 14:02:35.636351] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:13:57.655 [2024-11-17 14:02:35.636440] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:57.655 [2024-11-17 14:02:35.636446] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:57.655 [2024-11-17 14:02:35.636453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:57.655 [2024-11-17 14:02:35.644363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:57.655 [2024-11-17 14:02:35.644386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:57.655 [2024-11-17 14:02:35.652255] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:57.655 [2024-11-17 14:02:35.652372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:57.655 [2024-11-17 14:02:35.674259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:57.655 1 00:13:57.655 14:02:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.655 14:02:35 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:13:58.590 14:02:36 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83187 00:13:58.590 14:02:36 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:13:58.590 14:02:36 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:13:58.590 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:58.590 fio-3.35 00:13:58.590 Starting 1 process 00:14:03.854 14:02:41 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83154 00:14:03.854 14:02:41 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:09.125 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83154 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:09.125 14:02:46 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83298 00:14:09.125 14:02:46 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:09.125 14:02:46 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:09.125 14:02:46 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83298 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83298 ']' 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.125 14:02:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.125 [2024-11-17 14:02:46.768406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
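This kill/restart is the crux of the recovery test: spdk_tgt was SIGKILLed mid-I/O, and the new process (pid 83298) must re-attach to the kernel-side /dev/ublkb1 that survived. A minimal sketch of the RPC side after the restart, with the names and sizes used in this run:

    # the kernel ublk device outlives the killed spdk_tgt; recreate the target
    # and backing bdev, then re-attach to the existing ublk id 1
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096
    "$rpc" ublk_recover_disk malloc0 1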
00:14:09.125 [2024-11-17 14:02:46.768529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83298 ] 00:14:09.125 [2024-11-17 14:02:46.916343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.125 [2024-11-17 14:02:46.948507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.125 [2024-11-17 14:02:46.948553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.383 14:02:47 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.383 14:02:47 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:09.383 14:02:47 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:09.383 14:02:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.383 14:02:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.383 [2024-11-17 14:02:47.603256] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:09.384 [2024-11-17 14:02:47.604374] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.384 14:02:47 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.384 malloc0 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.384 14:02:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:09.384 [2024-11-17 14:02:47.635379] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:09.384 [2024-11-17 14:02:47.635422] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:09.384 [2024-11-17 14:02:47.635430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:09.384 [2024-11-17 14:02:47.643297] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:09.384 [2024-11-17 14:02:47.643325] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:09.384 1 00:14:09.384 14:02:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.384 14:02:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83187 00:14:10.758 [2024-11-17 14:02:48.643370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:10.758 [2024-11-17 14:02:48.651264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:10.759 [2024-11-17 14:02:48.651290] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:11.706 [2024-11-17 14:02:49.651313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:11.706 [2024-11-17 14:02:49.655265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:11.706 [2024-11-17 14:02:49.655277] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:14:12.641 [2024-11-17 14:02:50.655311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:12.641 [2024-11-17 14:02:50.663258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:12.641 [2024-11-17 14:02:50.663283] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:14:12.641 [2024-11-17 14:02:50.663289] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:12.641 [2024-11-17 14:02:50.663372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:34.621 [2024-11-17 14:03:12.005260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:34.621 [2024-11-17 14:03:12.011900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:34.621 [2024-11-17 14:03:12.019442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:34.621 [2024-11-17 14:03:12.019463] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:01.162 00:15:01.162 fio_test: (groupid=0, jobs=1): err= 0: pid=83190: Sun Nov 17 14:03:36 2024 00:15:01.162 read: IOPS=16.2k, BW=63.5MiB/s (66.5MB/s)(3807MiB/60002msec) 00:15:01.162 slat (nsec): min=940, max=3178.6k, avg=4789.55, stdev=3669.99 00:15:01.162 clat (usec): min=529, max=30339k, avg=3961.64, stdev=249628.75 00:15:01.162 lat (usec): min=533, max=30339k, avg=3966.43, stdev=249628.75 00:15:01.162 clat percentiles (usec): 00:15:01.162 | 1.00th=[ 1582], 5.00th=[ 1696], 10.00th=[ 1713], 20.00th=[ 1745], 00:15:01.162 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1778], 60.00th=[ 1795], 00:15:01.162 | 70.00th=[ 1811], 80.00th=[ 1827], 90.00th=[ 1876], 95.00th=[ 2835], 00:15:01.162 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 6194], 99.95th=[ 7373], 00:15:01.162 | 99.99th=[12649] 00:15:01.162 bw ( KiB/s): min=51792, max=137256, per=100.00%, avg=130030.10, stdev=15138.77, samples=59 00:15:01.162 iops : min=12948, max=34314, avg=32507.53, stdev=3784.69, samples=59 00:15:01.162 write: IOPS=16.2k, BW=63.4MiB/s (66.5MB/s)(3803MiB/60002msec); 0 zone resets 00:15:01.162 slat (nsec): min=928, max=113585, avg=4812.46, stdev=1532.39 00:15:01.162 clat (usec): min=523, max=30340k, avg=3912.09, stdev=242086.18 00:15:01.162 lat (usec): min=527, max=30340k, avg=3916.90, stdev=242086.18 00:15:01.162 clat percentiles (usec): 00:15:01.162 | 1.00th=[ 1614], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1827], 00:15:01.162 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1876], 60.00th=[ 1893], 00:15:01.162 | 70.00th=[ 1893], 80.00th=[ 1926], 90.00th=[ 1958], 95.00th=[ 2769], 00:15:01.162 | 99.00th=[ 4883], 99.50th=[ 5407], 99.90th=[ 6259], 99.95th=[ 7177], 00:15:01.162 | 99.99th=[12780] 00:15:01.162 bw ( KiB/s): min=50360, max=135536, per=100.00%, avg=129872.95, stdev=15378.75, samples=59 00:15:01.162 iops : min=12590, max=33884, avg=32468.24, stdev=3844.69, samples=59 00:15:01.162 lat (usec) : 750=0.01%, 1000=0.01% 00:15:01.162 lat (msec) : 2=92.45%, 4=5.23%, 10=2.29%, 20=0.01%, >=2000=0.01% 00:15:01.162 cpu : usr=3.46%, sys=15.84%, ctx=64730, majf=0, minf=15 00:15:01.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:01.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:01.162 
issued rwts: total=974663,973545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:01.163 00:15:01.163 Run status group 0 (all jobs): 00:15:01.163 READ: bw=63.5MiB/s (66.5MB/s), 63.5MiB/s-63.5MiB/s (66.5MB/s-66.5MB/s), io=3807MiB (3992MB), run=60002-60002msec 00:15:01.163 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=3803MiB (3988MB), run=60002-60002msec 00:15:01.163 00:15:01.163 Disk stats (read/write): 00:15:01.163 ublkb1: ios=971159/970024, merge=0/0, ticks=3807160/3678391, in_queue=7485552, util=99.87% 00:15:01.163 14:03:36 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:01.163 14:03:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.163 14:03:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 [2024-11-17 14:03:36.932449] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:01.163 [2024-11-17 14:03:36.972279] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:01.163 [2024-11-17 14:03:36.972407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:01.163 [2024-11-17 14:03:36.980270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:01.163 [2024-11-17 14:03:36.980348] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:01.163 [2024-11-17 14:03:36.980364] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:01.163 14:03:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.163 14:03:36 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:01.163 14:03:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.163 14:03:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 [2024-11-17 14:03:36.996312] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.163 [2024-11-17 14:03:36.997174] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:01.163 [2024-11-17 14:03:36.997199] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.163 14:03:37 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:01.163 14:03:37 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:01.163 14:03:37 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83298 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83298 ']' 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83298 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83298 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.163 killing process with pid 83298 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83298' 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83298 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83298 
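Stitched together, the recovery half that the new target just executed looks like this (same caveats as above: rpc.py is shorthand, and the ids and sizes are the ones from this run). The err=0 and the full 60.0 s runtime in the fio summary, with ublkb1 at util=99.87%, are the pass criteria: the job started against the old target finished cleanly against the recovered one.

  rpc.py ublk_create_target
  rpc.py bdev_malloc_create -b malloc0 64 4096
  rpc.py ublk_recover_disk malloc0 1   # GET_DEV_INFO poll, then START/END_USER_RECOVERY
  wait "$fio_pid"                      # the original fio job completes with err=0
  rpc.py ublk_stop_disk 1
  rpc.py ublk_destroy_target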
00:15:01.163 [2024-11-17 14:03:37.185771] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:01.163 [2024-11-17 14:03:37.185816] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:01.163 00:15:01.163 real 1m2.916s 00:15:01.163 user 1m46.790s 00:15:01.163 sys 0m20.205s 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.163 ************************************ 00:15:01.163 END TEST ublk_recovery 00:15:01.163 ************************************ 00:15:01.163 14:03:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 14:03:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:01.163 14:03:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.163 14:03:37 -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 14:03:37 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:01.163 14:03:37 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.163 14:03:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:01.163 14:03:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.163 14:03:37 -- common/autotest_common.sh@10 -- # set +x 00:15:01.163 ************************************ 00:15:01.163 START TEST ftl 00:15:01.163 ************************************ 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.163 * Looking for test storage... 
00:15:01.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.163 14:03:37 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.163 14:03:37 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.163 14:03:37 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.163 14:03:37 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.163 14:03:37 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.163 14:03:37 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:01.163 14:03:37 ftl -- scripts/common.sh@345 -- # : 1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.163 14:03:37 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:01.163 14:03:37 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@353 -- # local d=1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.163 14:03:37 ftl -- scripts/common.sh@355 -- # echo 1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.163 14:03:37 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@353 -- # local d=2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.163 14:03:37 ftl -- scripts/common.sh@355 -- # echo 2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.163 14:03:37 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.163 14:03:37 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.163 14:03:37 ftl -- scripts/common.sh@368 -- # return 0 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:01.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.163 --rc genhtml_branch_coverage=1 00:15:01.163 --rc genhtml_function_coverage=1 00:15:01.163 --rc genhtml_legend=1 00:15:01.163 --rc geninfo_all_blocks=1 00:15:01.163 --rc geninfo_unexecuted_blocks=1 00:15:01.163 00:15:01.163 ' 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:01.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.163 --rc genhtml_branch_coverage=1 00:15:01.163 --rc genhtml_function_coverage=1 00:15:01.163 --rc genhtml_legend=1 00:15:01.163 --rc geninfo_all_blocks=1 00:15:01.163 --rc geninfo_unexecuted_blocks=1 00:15:01.163 00:15:01.163 ' 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:01.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.163 --rc genhtml_branch_coverage=1 00:15:01.163 --rc genhtml_function_coverage=1 00:15:01.163 --rc 
genhtml_legend=1 00:15:01.163 --rc geninfo_all_blocks=1 00:15:01.163 --rc geninfo_unexecuted_blocks=1 00:15:01.163 00:15:01.163 ' 00:15:01.163 14:03:37 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:01.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.163 --rc genhtml_branch_coverage=1 00:15:01.163 --rc genhtml_function_coverage=1 00:15:01.163 --rc genhtml_legend=1 00:15:01.163 --rc geninfo_all_blocks=1 00:15:01.163 --rc geninfo_unexecuted_blocks=1 00:15:01.163 00:15:01.163 ' 00:15:01.163 14:03:37 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:01.163 14:03:37 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:01.163 14:03:37 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.163 14:03:37 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:01.163 14:03:37 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:01.163 14:03:37 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:01.163 14:03:37 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.163 14:03:37 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:01.163 14:03:37 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:01.163 14:03:37 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.163 14:03:37 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.163 14:03:37 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:01.163 14:03:37 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:01.163 14:03:37 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:01.163 14:03:37 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:01.163 14:03:37 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:01.163 14:03:37 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:01.163 14:03:37 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.164 14:03:37 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:01.164 14:03:37 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:01.164 14:03:37 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:01.164 14:03:37 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:01.164 14:03:37 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:01.164 14:03:37 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:01.164 14:03:37 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:01.164 14:03:37 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:01.164 14:03:37 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:01.164 14:03:37 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:01.164 14:03:37 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:01.164 14:03:37 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:01.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.164 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.164 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.164 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.164 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.164 14:03:38 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84098 00:15:01.164 14:03:38 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84098 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@831 -- # '[' -z 84098 ']' 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.164 14:03:38 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:01.164 [2024-11-17 14:03:38.177439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:01.164 [2024-11-17 14:03:38.177559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84098 ] 00:15:01.164 [2024-11-17 14:03:38.313675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.164 [2024-11-17 14:03:38.341854] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.164 14:03:38 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:01.164 14:03:38 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:01.164 14:03:39 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:01.164 14:03:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:01.164 14:03:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:01.731 14:03:39 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:01.731 14:03:39 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:01.731 14:03:39 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@50 -- # break 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:01.990 14:03:40 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:01.990 14:03:40 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:02.248 14:03:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:02.248 14:03:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:02.248 14:03:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:02.248 14:03:40 ftl -- ftl/ftl.sh@63 -- # break 00:15:02.248 14:03:40 ftl -- ftl/ftl.sh@66 -- # killprocess 84098 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@950 -- # '[' -z 84098 ']' 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@954 -- # kill -0 84098 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@955 -- # uname 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84098 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.248 killing process with pid 84098 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84098' 00:15:02.248 14:03:40 ftl -- common/autotest_common.sh@969 -- # kill 84098 00:15:02.249 14:03:40 ftl -- common/autotest_common.sh@974 -- # wait 84098 00:15:02.507 14:03:40 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:02.507 14:03:40 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:02.507 14:03:40 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:02.507 14:03:40 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.507 14:03:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:02.507 ************************************ 00:15:02.507 START TEST ftl_fio_basic 00:15:02.507 ************************************ 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:02.507 * Looking for test storage... 
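The two jq filters in the trace above are how ftl.sh pairs devices for FTL: the first selects a disk that exposes 64-byte metadata (md_size==64) to serve as the NV cache, the second takes any other non-zoned disk with at least 1310720 blocks (5 GiB at 4 KiB blocks) as the base device. Spelled out, the two probes were:

  rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false
      and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:10.0   (becomes nv_cache)
  rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
      and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
  # -> 0000:00:11.0   (becomes the base device)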
00:15:02.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:02.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.507 --rc genhtml_branch_coverage=1 00:15:02.507 --rc genhtml_function_coverage=1 00:15:02.507 --rc genhtml_legend=1 00:15:02.507 --rc geninfo_all_blocks=1 00:15:02.507 --rc geninfo_unexecuted_blocks=1 00:15:02.507 00:15:02.507 ' 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:02.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.507 --rc 
genhtml_branch_coverage=1 00:15:02.507 --rc genhtml_function_coverage=1 00:15:02.507 --rc genhtml_legend=1 00:15:02.507 --rc geninfo_all_blocks=1 00:15:02.507 --rc geninfo_unexecuted_blocks=1 00:15:02.507 00:15:02.507 ' 00:15:02.507 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:02.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.507 --rc genhtml_branch_coverage=1 00:15:02.508 --rc genhtml_function_coverage=1 00:15:02.508 --rc genhtml_legend=1 00:15:02.508 --rc geninfo_all_blocks=1 00:15:02.508 --rc geninfo_unexecuted_blocks=1 00:15:02.508 00:15:02.508 ' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:02.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.508 --rc genhtml_branch_coverage=1 00:15:02.508 --rc genhtml_function_coverage=1 00:15:02.508 --rc genhtml_legend=1 00:15:02.508 --rc geninfo_all_blocks=1 00:15:02.508 --rc geninfo_unexecuted_blocks=1 00:15:02.508 00:15:02.508 ' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:02.508 
14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84209 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84209 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84209 ']' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:02.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
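The suite['basic'] entry selects three fio workloads (randw-verify, randw-verify-j2, randw-verify-depth128), and the two exports above are how those workloads find their target: the job files run through fio's SPDK bdev plugin, which loads the bdev configuration named by FTL_JSON_CONF and opens FTL_BDEV_NAME as the filename. A sketch of a job file in that shape, assuming the plugin's usual option names (ioengine=spdk_bdev, spdk_json_conf, thread=1) and an assumed plugin path in $SPDK_DIR; the real job files ship with the test suite and will differ:

  cat > /tmp/randw-verify.fio <<'EOF'
  ; fio expands ${VAR} from the environment; thread=1 is required by the SPDK plugin
  [global]
  ioengine=spdk_bdev
  spdk_json_conf=${FTL_JSON_CONF}
  thread=1
  [randw-verify]
  filename=${FTL_BDEV_NAME}
  rw=randwrite
  size=256M
  verify=crc32c
  EOF
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" fio /tmp/randw-verify.fio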
00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.508 14:03:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:02.780 [2024-11-17 14:03:40.837384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:02.780 [2024-11-17 14:03:40.837505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84209 ] 00:15:02.780 [2024-11-17 14:03:40.984815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:02.780 [2024-11-17 14:03:41.017653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.780 [2024-11-17 14:03:41.017895] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.780 [2024-11-17 14:03:41.017971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:03.729 14:03:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:03.988 { 00:15:03.988 "name": "nvme0n1", 00:15:03.988 "aliases": [ 00:15:03.988 "5a443bf6-37c4-4a3b-8cc8-213de4357f83" 00:15:03.988 ], 00:15:03.988 "product_name": "NVMe disk", 00:15:03.988 "block_size": 4096, 00:15:03.988 "num_blocks": 1310720, 00:15:03.988 "uuid": "5a443bf6-37c4-4a3b-8cc8-213de4357f83", 00:15:03.988 "numa_id": -1, 00:15:03.988 "assigned_rate_limits": { 00:15:03.988 "rw_ios_per_sec": 0, 00:15:03.988 "rw_mbytes_per_sec": 0, 00:15:03.988 "r_mbytes_per_sec": 0, 00:15:03.988 "w_mbytes_per_sec": 0 00:15:03.988 }, 00:15:03.988 "claimed": false, 00:15:03.988 "zoned": false, 00:15:03.988 "supported_io_types": { 00:15:03.988 "read": true, 00:15:03.988 "write": true, 00:15:03.988 "unmap": true, 00:15:03.988 "flush": true, 00:15:03.988 "reset": true, 00:15:03.988 "nvme_admin": true, 00:15:03.988 "nvme_io": true, 00:15:03.988 "nvme_io_md": 
false, 00:15:03.988 "write_zeroes": true, 00:15:03.988 "zcopy": false, 00:15:03.988 "get_zone_info": false, 00:15:03.988 "zone_management": false, 00:15:03.988 "zone_append": false, 00:15:03.988 "compare": true, 00:15:03.988 "compare_and_write": false, 00:15:03.988 "abort": true, 00:15:03.988 "seek_hole": false, 00:15:03.988 "seek_data": false, 00:15:03.988 "copy": true, 00:15:03.988 "nvme_iov_md": false 00:15:03.988 }, 00:15:03.988 "driver_specific": { 00:15:03.988 "nvme": [ 00:15:03.988 { 00:15:03.988 "pci_address": "0000:00:11.0", 00:15:03.988 "trid": { 00:15:03.988 "trtype": "PCIe", 00:15:03.988 "traddr": "0000:00:11.0" 00:15:03.988 }, 00:15:03.988 "ctrlr_data": { 00:15:03.988 "cntlid": 0, 00:15:03.988 "vendor_id": "0x1b36", 00:15:03.988 "model_number": "QEMU NVMe Ctrl", 00:15:03.988 "serial_number": "12341", 00:15:03.988 "firmware_revision": "8.0.0", 00:15:03.988 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:03.988 "oacs": { 00:15:03.988 "security": 0, 00:15:03.988 "format": 1, 00:15:03.988 "firmware": 0, 00:15:03.988 "ns_manage": 1 00:15:03.988 }, 00:15:03.988 "multi_ctrlr": false, 00:15:03.988 "ana_reporting": false 00:15:03.988 }, 00:15:03.988 "vs": { 00:15:03.988 "nvme_version": "1.4" 00:15:03.988 }, 00:15:03.988 "ns_data": { 00:15:03.988 "id": 1, 00:15:03.988 "can_share": false 00:15:03.988 } 00:15:03.988 } 00:15:03.988 ], 00:15:03.988 "mp_policy": "active_passive" 00:15:03.988 } 00:15:03.988 } 00:15:03.988 ]' 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:03.988 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:04.247 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:04.247 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:04.506 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=9d1bb0ea-329f-4e80-aaab-221222c5fd1a 00:15:04.506 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9d1bb0ea-329f-4e80-aaab-221222c5fd1a 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:42 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:04.765 14:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:04.765 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:04.765 { 00:15:04.765 "name": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:04.765 "aliases": [ 00:15:04.765 "lvs/nvme0n1p0" 00:15:04.765 ], 00:15:04.765 "product_name": "Logical Volume", 00:15:04.765 "block_size": 4096, 00:15:04.765 "num_blocks": 26476544, 00:15:04.765 "uuid": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:04.765 "assigned_rate_limits": { 00:15:04.765 "rw_ios_per_sec": 0, 00:15:04.765 "rw_mbytes_per_sec": 0, 00:15:04.765 "r_mbytes_per_sec": 0, 00:15:04.765 "w_mbytes_per_sec": 0 00:15:04.765 }, 00:15:04.765 "claimed": false, 00:15:04.765 "zoned": false, 00:15:04.765 "supported_io_types": { 00:15:04.765 "read": true, 00:15:04.765 "write": true, 00:15:04.765 "unmap": true, 00:15:04.765 "flush": false, 00:15:04.765 "reset": true, 00:15:04.765 "nvme_admin": false, 00:15:04.765 "nvme_io": false, 00:15:04.765 "nvme_io_md": false, 00:15:04.765 "write_zeroes": true, 00:15:04.765 "zcopy": false, 00:15:04.765 "get_zone_info": false, 00:15:04.765 "zone_management": false, 00:15:04.765 "zone_append": false, 00:15:04.765 "compare": false, 00:15:04.765 "compare_and_write": false, 00:15:04.765 "abort": false, 00:15:04.765 "seek_hole": true, 00:15:04.765 "seek_data": true, 00:15:04.765 "copy": false, 00:15:04.765 "nvme_iov_md": false 00:15:04.765 }, 00:15:04.765 "driver_specific": { 00:15:04.765 "lvol": { 00:15:04.765 "lvol_store_uuid": "9d1bb0ea-329f-4e80-aaab-221222c5fd1a", 00:15:04.765 "base_bdev": "nvme0n1", 00:15:04.765 "thin_provision": true, 00:15:04.765 "num_allocated_clusters": 0, 00:15:04.765 "snapshot": false, 00:15:04.765 "clone": false, 00:15:04.765 "esnap_clone": false 00:15:04.765 } 00:15:04.765 } 00:15:04.765 } 00:15:04.765 ]' 00:15:04.765 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:05.024 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.283 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:05.283 { 00:15:05.283 "name": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:05.283 "aliases": [ 00:15:05.283 "lvs/nvme0n1p0" 00:15:05.283 ], 00:15:05.283 "product_name": "Logical Volume", 00:15:05.283 "block_size": 4096, 00:15:05.283 "num_blocks": 26476544, 00:15:05.283 "uuid": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:05.283 "assigned_rate_limits": { 00:15:05.283 "rw_ios_per_sec": 0, 00:15:05.283 "rw_mbytes_per_sec": 0, 00:15:05.283 "r_mbytes_per_sec": 0, 00:15:05.283 "w_mbytes_per_sec": 0 00:15:05.283 }, 00:15:05.283 "claimed": false, 00:15:05.283 "zoned": false, 00:15:05.283 "supported_io_types": { 00:15:05.283 "read": true, 00:15:05.283 "write": true, 00:15:05.283 "unmap": true, 00:15:05.283 "flush": false, 00:15:05.283 "reset": true, 00:15:05.283 "nvme_admin": false, 00:15:05.283 "nvme_io": false, 00:15:05.283 "nvme_io_md": false, 00:15:05.283 "write_zeroes": true, 00:15:05.283 "zcopy": false, 00:15:05.283 "get_zone_info": false, 00:15:05.283 "zone_management": false, 00:15:05.283 "zone_append": false, 00:15:05.283 "compare": false, 00:15:05.283 "compare_and_write": false, 00:15:05.283 "abort": false, 00:15:05.283 "seek_hole": true, 00:15:05.283 "seek_data": true, 00:15:05.283 "copy": false, 00:15:05.284 "nvme_iov_md": false 00:15:05.284 }, 00:15:05.284 "driver_specific": { 00:15:05.284 "lvol": { 00:15:05.284 "lvol_store_uuid": "9d1bb0ea-329f-4e80-aaab-221222c5fd1a", 00:15:05.284 "base_bdev": "nvme0n1", 00:15:05.284 "thin_provision": true, 00:15:05.284 "num_allocated_clusters": 0, 00:15:05.284 "snapshot": false, 00:15:05.284 "clone": false, 00:15:05.284 "esnap_clone": false 00:15:05.284 } 00:15:05.284 } 00:15:05.284 } 00:15:05.284 ]' 00:15:05.284 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:05.542 
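A note on the "[: -eq: unary operator expected" complaint printed on the next line: at fio.sh line 52 a variable that is empty in this run (its name is not visible in the xtrace, which only shows '[' -eq 1 ']') is expanded unquoted inside [ ], so test receives -eq as its first word. The script carries on, since the failed test simply takes the false branch, but the guard on that line can never fire. Using a stand-in name $flag for illustration:

  flag=
  [ $flag -eq 1 ]         # expands to: [ -eq 1 ]  -> "[: -eq: unary operator expected"
  # spellings that survive an empty or unset value:
  [ "${flag:-0}" -eq 1 ]  # quote the expansion and default it to 0
  [[ $flag -eq 1 ]]       # bash [[ ]] does not word-split; empty evaluates as 0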
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:05.542 14:03:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5caa8c2-3209-4169-a215-bfd5ce41e8ed 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:05.802 { 00:15:05.802 "name": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:05.802 "aliases": [ 00:15:05.802 "lvs/nvme0n1p0" 00:15:05.802 ], 00:15:05.802 "product_name": "Logical Volume", 00:15:05.802 "block_size": 4096, 00:15:05.802 "num_blocks": 26476544, 00:15:05.802 "uuid": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed", 00:15:05.802 "assigned_rate_limits": { 00:15:05.802 "rw_ios_per_sec": 0, 00:15:05.802 "rw_mbytes_per_sec": 0, 00:15:05.802 "r_mbytes_per_sec": 0, 00:15:05.802 "w_mbytes_per_sec": 0 00:15:05.802 }, 00:15:05.802 "claimed": false, 00:15:05.802 "zoned": false, 00:15:05.802 "supported_io_types": { 00:15:05.802 "read": true, 00:15:05.802 "write": true, 00:15:05.802 "unmap": true, 00:15:05.802 "flush": false, 00:15:05.802 "reset": true, 00:15:05.802 "nvme_admin": false, 00:15:05.802 "nvme_io": false, 00:15:05.802 "nvme_io_md": false, 00:15:05.802 "write_zeroes": true, 00:15:05.802 "zcopy": false, 00:15:05.802 "get_zone_info": false, 00:15:05.802 "zone_management": false, 00:15:05.802 "zone_append": false, 00:15:05.802 "compare": false, 00:15:05.802 "compare_and_write": false, 00:15:05.802 "abort": false, 00:15:05.802 "seek_hole": true, 00:15:05.802 "seek_data": true, 00:15:05.802 "copy": false, 00:15:05.802 "nvme_iov_md": false 00:15:05.802 }, 00:15:05.802 "driver_specific": { 00:15:05.802 "lvol": { 00:15:05.802 "lvol_store_uuid": "9d1bb0ea-329f-4e80-aaab-221222c5fd1a", 00:15:05.802 "base_bdev": "nvme0n1", 00:15:05.802 "thin_provision": true, 00:15:05.802 "num_allocated_clusters": 0, 00:15:05.802 "snapshot": false, 00:15:05.802 "clone": false, 00:15:05.802 "esnap_clone": false 00:15:05.802 } 00:15:05.802 } 00:15:05.802 } 00:15:05.802 ]' 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:05.802 14:03:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a5caa8c2-3209-4169-a215-bfd5ce41e8ed -c nvc0n1p0 --l2p_dram_limit 60 00:15:06.062 [2024-11-17 14:03:44.263205] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.263255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:06.062 [2024-11-17 14:03:44.263273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:06.062 [2024-11-17 14:03:44.263281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.263329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.263338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:06.062 [2024-11-17 14:03:44.263346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:15:06.062 [2024-11-17 14:03:44.263355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.263384] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:06.062 [2024-11-17 14:03:44.263633] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:06.062 [2024-11-17 14:03:44.263647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.263654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:06.062 [2024-11-17 14:03:44.263668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:15:06.062 [2024-11-17 14:03:44.263676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.263701] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8f757042-4c42-4d99-a402-37f6b2ddd3c9 00:15:06.062 [2024-11-17 14:03:44.264640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.264667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:06.062 [2024-11-17 14:03:44.264687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:06.062 [2024-11-17 14:03:44.264693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.269331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.269357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:06.062 [2024-11-17 14:03:44.269368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.570 ms 00:15:06.062 [2024-11-17 14:03:44.269374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.269454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.269464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:06.062 [2024-11-17 14:03:44.269471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:15:06.062 [2024-11-17 14:03:44.269477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.269521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.269534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:06.062 [2024-11-17 14:03:44.269542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:06.062 [2024-11-17 14:03:44.269555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:15:06.062 [2024-11-17 14:03:44.269579] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:06.062 [2024-11-17 14:03:44.270830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.270857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:06.062 [2024-11-17 14:03:44.270864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:15:06.062 [2024-11-17 14:03:44.270871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.270911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.270920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:06.062 [2024-11-17 14:03:44.270926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:06.062 [2024-11-17 14:03:44.270935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.270958] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:06.062 [2024-11-17 14:03:44.271073] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:06.062 [2024-11-17 14:03:44.271090] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:06.062 [2024-11-17 14:03:44.271101] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:06.062 [2024-11-17 14:03:44.271109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271117] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271124] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:06.062 [2024-11-17 14:03:44.271132] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:06.062 [2024-11-17 14:03:44.271138] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:06.062 [2024-11-17 14:03:44.271146] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:06.062 [2024-11-17 14:03:44.271152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.271160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:06.062 [2024-11-17 14:03:44.271165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:15:06.062 [2024-11-17 14:03:44.271172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.271249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.062 [2024-11-17 14:03:44.271266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:06.062 [2024-11-17 14:03:44.271273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:15:06.062 [2024-11-17 14:03:44.271279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.062 [2024-11-17 14:03:44.271370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:06.062 [2024-11-17 14:03:44.271379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:06.062 
[2024-11-17 14:03:44.271385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:06.062 [2024-11-17 14:03:44.271405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:06.062 [2024-11-17 14:03:44.271422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:06.062 [2024-11-17 14:03:44.271433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:06.062 [2024-11-17 14:03:44.271440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:06.062 [2024-11-17 14:03:44.271445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:06.062 [2024-11-17 14:03:44.271454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:06.062 [2024-11-17 14:03:44.271459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:06.062 [2024-11-17 14:03:44.271465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:06.062 [2024-11-17 14:03:44.271477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:06.062 [2024-11-17 14:03:44.271505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.062 [2024-11-17 14:03:44.271525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:06.062 [2024-11-17 14:03:44.271532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:06.062 [2024-11-17 14:03:44.271538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.063 [2024-11-17 14:03:44.271545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:06.063 [2024-11-17 14:03:44.271551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.063 [2024-11-17 14:03:44.271564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:06.063 [2024-11-17 14:03:44.271573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:06.063 [2024-11-17 14:03:44.271586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:06.063 [2024-11-17 14:03:44.271593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:15:06.063 [2024-11-17 14:03:44.271608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:06.063 [2024-11-17 14:03:44.271615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:06.063 [2024-11-17 14:03:44.271621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:06.063 [2024-11-17 14:03:44.271631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:06.063 [2024-11-17 14:03:44.271637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:06.063 [2024-11-17 14:03:44.271644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:06.063 [2024-11-17 14:03:44.271658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:06.063 [2024-11-17 14:03:44.271664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271671] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:06.063 [2024-11-17 14:03:44.271678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:06.063 [2024-11-17 14:03:44.271687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:06.063 [2024-11-17 14:03:44.271693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:06.063 [2024-11-17 14:03:44.271701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:06.063 [2024-11-17 14:03:44.271707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:06.063 [2024-11-17 14:03:44.271714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:06.063 [2024-11-17 14:03:44.271719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:06.063 [2024-11-17 14:03:44.271726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:06.063 [2024-11-17 14:03:44.271732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:06.063 [2024-11-17 14:03:44.271741] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:06.063 [2024-11-17 14:03:44.271751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:06.063 [2024-11-17 14:03:44.271773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:06.063 [2024-11-17 14:03:44.271781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:06.063 [2024-11-17 14:03:44.271787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:06.063 [2024-11-17 14:03:44.271796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:06.063 [2024-11-17 14:03:44.271802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:06.063 [2024-11-17 
14:03:44.271811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:06.063 [2024-11-17 14:03:44.271817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:06.063 [2024-11-17 14:03:44.271825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:06.063 [2024-11-17 14:03:44.271831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:06.063 [2024-11-17 14:03:44.271868] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:06.063 [2024-11-17 14:03:44.271875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:06.063 [2024-11-17 14:03:44.271888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:06.063 [2024-11-17 14:03:44.271895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:06.063 [2024-11-17 14:03:44.271900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:06.063 [2024-11-17 14:03:44.271907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:06.063 [2024-11-17 14:03:44.271913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:06.063 [2024-11-17 14:03:44.271928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:15:06.063 [2024-11-17 14:03:44.271934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:06.063 [2024-11-17 14:03:44.271984] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:15:06.063 [2024-11-17 14:03:44.271999] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:09.348 [2024-11-17 14:03:47.083479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.083554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:09.348 [2024-11-17 14:03:47.083571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2811.477 ms 00:15:09.348 [2024-11-17 14:03:47.083579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.101476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.101527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:09.348 [2024-11-17 14:03:47.101544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.793 ms 00:15:09.348 [2024-11-17 14:03:47.101552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.101690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.101700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:09.348 [2024-11-17 14:03:47.101710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:09.348 [2024-11-17 14:03:47.101717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.111990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.112043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:09.348 [2024-11-17 14:03:47.112062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.211 ms 00:15:09.348 [2024-11-17 14:03:47.112075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.112127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.112140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:09.348 [2024-11-17 14:03:47.112154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:09.348 [2024-11-17 14:03:47.112166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.112611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.112643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:09.348 [2024-11-17 14:03:47.112673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:15:09.348 [2024-11-17 14:03:47.112686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.112887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.112909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:09.348 [2024-11-17 14:03:47.112926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:15:09.348 [2024-11-17 14:03:47.112939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.118836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.118869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:09.348 [2024-11-17 
14:03:47.118879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.860 ms 00:15:09.348 [2024-11-17 14:03:47.118897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.127074] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:09.348 [2024-11-17 14:03:47.141125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.141164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:09.348 [2024-11-17 14:03:47.141175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.145 ms 00:15:09.348 [2024-11-17 14:03:47.141194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.181779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.181832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:09.348 [2024-11-17 14:03:47.181846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.551 ms 00:15:09.348 [2024-11-17 14:03:47.181858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.182024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.182046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:09.348 [2024-11-17 14:03:47.182065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:15:09.348 [2024-11-17 14:03:47.182075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.184919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.184957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:09.348 [2024-11-17 14:03:47.184975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:15:09.348 [2024-11-17 14:03:47.184988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.348 [2024-11-17 14:03:47.187296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.348 [2024-11-17 14:03:47.187330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:09.348 [2024-11-17 14:03:47.187339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:15:09.349 [2024-11-17 14:03:47.187348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.349 [2024-11-17 14:03:47.187663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.349 [2024-11-17 14:03:47.187718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:09.349 [2024-11-17 14:03:47.187727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:15:09.349 [2024-11-17 14:03:47.187737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.349 [2024-11-17 14:03:47.209736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.349 [2024-11-17 14:03:47.209778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:09.349 [2024-11-17 14:03:47.209791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.976 ms 00:15:09.349 [2024-11-17 14:03:47.209801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.349 [2024-11-17 14:03:47.213359] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.349 [2024-11-17 14:03:47.213397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:15:09.349 [2024-11-17 14:03:47.213408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms
00:15:09.349 [2024-11-17 14:03:47.213419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.349 [2024-11-17 14:03:47.216212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.349 [2024-11-17 14:03:47.216259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:15:09.349 [2024-11-17 14:03:47.216268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.761 ms
00:15:09.349 [2024-11-17 14:03:47.216277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.349 [2024-11-17 14:03:47.219163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.349 [2024-11-17 14:03:47.219199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:15:09.349 [2024-11-17 14:03:47.219208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.862 ms
00:15:09.349 [2024-11-17 14:03:47.219219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.349 [2024-11-17 14:03:47.219256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.349 [2024-11-17 14:03:47.219266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:15:09.349 [2024-11-17 14:03:47.219275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:15:09.349 [2024-11-17 14:03:47.219283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.349 [2024-11-17 14:03:47.219374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.349 [2024-11-17 14:03:47.219385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:15:09.349 [2024-11-17 14:03:47.219403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:15:09.349 [2024-11-17 14:03:47.219416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.349 [2024-11-17 14:03:47.220371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2956.695 ms, result 0
00:15:09.349 {
00:15:09.349 "name": "ftl0",
00:15:09.349 "uuid": "8f757042-4c42-4d99-a402-37f6b2ddd3c9"
00:15:09.349 }
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:15:09.349 [
00:15:09.349 {
00:15:09.349 "name": "ftl0",
00:15:09.349 "aliases": [
00:15:09.349 "8f757042-4c42-4d99-a402-37f6b2ddd3c9"
00:15:09.349 ],
00:15:09.349 "product_name": "FTL disk",
"block_size": 4096,
00:15:09.349 "num_blocks": 20971520,
00:15:09.349 "uuid": "8f757042-4c42-4d99-a402-37f6b2ddd3c9",
00:15:09.349 "assigned_rate_limits": {
00:15:09.349 "rw_ios_per_sec": 0,
00:15:09.349 "rw_mbytes_per_sec": 0,
00:15:09.349 "r_mbytes_per_sec": 0,
00:15:09.349 "w_mbytes_per_sec": 0
00:15:09.349 },
00:15:09.349 "claimed": false,
00:15:09.349 "zoned": false,
00:15:09.349 "supported_io_types": {
00:15:09.349 "read": true,
00:15:09.349 "write": true,
00:15:09.349 "unmap": true,
00:15:09.349 "flush": true,
00:15:09.349 "reset": false,
00:15:09.349 "nvme_admin": false,
00:15:09.349 "nvme_io": false,
00:15:09.349 "nvme_io_md": false,
00:15:09.349 "write_zeroes": true,
00:15:09.349 "zcopy": false,
00:15:09.349 "get_zone_info": false,
00:15:09.349 "zone_management": false,
00:15:09.349 "zone_append": false,
00:15:09.349 "compare": false,
00:15:09.349 "compare_and_write": false,
00:15:09.349 "abort": false,
00:15:09.349 "seek_hole": false,
00:15:09.349 "seek_data": false,
00:15:09.349 "copy": false,
00:15:09.349 "nvme_iov_md": false
00:15:09.349 },
00:15:09.349 "driver_specific": {
00:15:09.349 "ftl": {
00:15:09.349 "base_bdev": "a5caa8c2-3209-4169-a215-bfd5ce41e8ed",
00:15:09.349 "cache": "nvc0n1p0"
00:15:09.349 }
00:15:09.349 }
00:15:09.349 }
00:15:09.349 ]
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:15:09.349 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:15:09.607 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
00:15:09.867 14:03:47 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
[2024-11-17 14:03:48.026188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.867 [2024-11-17 14:03:48.026235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:15:09.867 [2024-11-17 14:03:48.026260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:15:09.867 [2024-11-17 14:03:48.026268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.867 [2024-11-17 14:03:48.026304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:15:09.867 [2024-11-17 14:03:48.026774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.867 [2024-11-17 14:03:48.026802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:15:09.867 [2024-11-17 14:03:48.026812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms
00:15:09.867 [2024-11-17 14:03:48.026823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.867 [2024-11-17 14:03:48.027229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.867 [2024-11-17 14:03:48.027264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:15:09.867 [2024-11-17 14:03:48.027273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms
00:15:09.867 [2024-11-17 14:03:48.027283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.867 [2024-11-17 14:03:48.030514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:15:09.867 [2024-11-17 14:03:48.030549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:15:09.867 [2024-11-17
14:03:48.030558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:15:09.867 [2024-11-17 14:03:48.030568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.036764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.036799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:09.867 [2024-11-17 14:03:48.036809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:15:09.867 [2024-11-17 14:03:48.036818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.038434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.038484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:09.867 [2024-11-17 14:03:48.038493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:15:09.867 [2024-11-17 14:03:48.038502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.042537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.042576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:09.867 [2024-11-17 14:03:48.042586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.992 ms 00:15:09.867 [2024-11-17 14:03:48.042598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.042753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.042770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:09.867 [2024-11-17 14:03:48.042778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:15:09.867 [2024-11-17 14:03:48.042787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.044538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.044575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:09.867 [2024-11-17 14:03:48.044584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:15:09.867 [2024-11-17 14:03:48.044593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.045887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.045925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:09.867 [2024-11-17 14:03:48.045934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:15:09.867 [2024-11-17 14:03:48.045943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.046837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.046873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:09.867 [2024-11-17 14:03:48.046883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:15:09.867 [2024-11-17 14:03:48.046891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.047814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.867 [2024-11-17 14:03:48.047850] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:09.867 [2024-11-17 14:03:48.047859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:15:09.867 [2024-11-17 14:03:48.047868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.867 [2024-11-17 14:03:48.047902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:09.867 [2024-11-17 14:03:48.047918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.047996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:09.867 [2024-11-17 14:03:48.048092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 
14:03:48.048101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:09.868 [2024-11-17 14:03:48.048319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:09.868 [2024-11-17 14:03:48.048778] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:09.868 [2024-11-17 14:03:48.048786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f757042-4c42-4d99-a402-37f6b2ddd3c9 00:15:09.868 [2024-11-17 14:03:48.048796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:09.868 [2024-11-17 14:03:48.048811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:09.868 [2024-11-17 14:03:48.048823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:09.868 [2024-11-17 14:03:48.048830] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:09.868 [2024-11-17 14:03:48.048839] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:09.868 [2024-11-17 14:03:48.048846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:09.868 [2024-11-17 14:03:48.048855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:09.868 [2024-11-17 14:03:48.048861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:09.868 [2024-11-17 14:03:48.048869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:09.868 [2024-11-17 14:03:48.048876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.869 [2024-11-17 14:03:48.048884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:09.869 [2024-11-17 14:03:48.048892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:15:09.869 [2024-11-17 14:03:48.048900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.050342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.869 [2024-11-17 14:03:48.050369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:09.869 [2024-11-17 14:03:48.050386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:15:09.869 [2024-11-17 14:03:48.050395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.050489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:09.869 [2024-11-17 14:03:48.050516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:09.869 [2024-11-17 14:03:48.050525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:15:09.869 [2024-11-17 14:03:48.050534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.055708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.055745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:09.869 [2024-11-17 14:03:48.055754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.055764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 
[2024-11-17 14:03:48.055820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.055840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:09.869 [2024-11-17 14:03:48.055847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.055857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.055915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.055934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:09.869 [2024-11-17 14:03:48.055942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.055951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.055978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.055997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:09.869 [2024-11-17 14:03:48.056004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.056012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.065133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.065178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:09.869 [2024-11-17 14:03:48.065197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.065207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.072713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.072759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:09.869 [2024-11-17 14:03:48.072769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.072780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.072845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.072859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:09.869 [2024-11-17 14:03:48.072869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.072877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.072915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.072926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:09.869 [2024-11-17 14:03:48.072934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.072942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:09.869 [2024-11-17 14:03:48.073015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:09.869 [2024-11-17 14:03:48.073026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:09.869 [2024-11-17 14:03:48.073034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:09.869 [2024-11-17 14:03:48.073053] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.869 [2024-11-17 14:03:48.073099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:09.869 [2024-11-17 14:03:48.073110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:15:09.869 [2024-11-17 14:03:48.073117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:09.869 [2024-11-17 14:03:48.073126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.869 [2024-11-17 14:03:48.073164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:09.869 [2024-11-17 14:03:48.073176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:15:09.869 [2024-11-17 14:03:48.073184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:09.869 [2024-11-17 14:03:48.073195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.869 [2024-11-17 14:03:48.073255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:15:09.869 [2024-11-17 14:03:48.073268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:15:09.869 [2024-11-17 14:03:48.073284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:15:09.869 [2024-11-17 14:03:48.073301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:15:09.869 [2024-11-17 14:03:48.073446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.240 ms, result 0
00:15:09.869 true
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84209
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84209 ']'
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84209
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84209
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:09.869 killing process with pid 84209
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84209'
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84209
00:15:09.869 14:03:48 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84209
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:15.138 14:03:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:15:15.138 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:15:15.138 fio-3.35
00:15:15.138 Starting 1 thread
00:15:18.425
00:15:18.425 test: (groupid=0, jobs=1): err= 0: pid=84372: Sun Nov 17 14:03:56 2024
00:15:18.425 read: IOPS=1459, BW=96.9MiB/s (102MB/s)(255MiB/2627msec)
00:15:18.425 slat (nsec): min=2926, max=18047, avg=3695.36, stdev=1507.25
00:15:18.425 clat (usec): min=229, max=770, avg=309.54, stdev=35.95
00:15:18.425 lat (usec): min=232, max=779, avg=313.23, stdev=36.68
00:15:18.425 clat percentiles (usec):
00:15:18.425 | 1.00th=[ 251], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285],
00:15:18.425 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 306], 60.00th=[ 310],
00:15:18.425 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 383],
00:15:18.425 | 99.00th=[ 433], 99.50th=[ 486], 99.90th=[ 594], 99.95th=[ 742],
00:15:18.425 | 99.99th=[ 775]
00:15:18.425 write: IOPS=1469, BW=97.6MiB/s (102MB/s)(256MiB/2624msec); 0 zone resets
00:15:18.425 slat (nsec): min=13717, max=74883, avg=16837.50, stdev=2801.01
00:15:18.425 clat (usec): min=257, max=935, avg=339.59, stdev=50.41
00:15:18.425 lat (usec): min=273, max=989, avg=356.43, stdev=51.07
00:15:18.425 clat percentiles (usec):
00:15:18.425 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310],
00:15:18.425 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338],
00:15:18.425 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 404],
00:15:18.425 | 99.00th=[ 644], 99.50th=[ 676], 99.90th=[ 824], 99.95th=[ 873],
00:15:18.425 | 99.99th=[ 938]
00:15:18.425 bw ( KiB/s): min=96832, max=102816, per=99.98%, avg=99905.60, stdev=2749.76, samples=5
00:15:18.425 iops : min= 1424, max= 1512, avg=1469.20, stdev=40.44, samples=5
00:15:18.425 lat (usec) : 250=0.38%, 500=98.63%, 750=0.87%, 1000=0.12%
00:15:18.425 cpu : usr=99.35%, sys=0.11%, ctx=6, majf=0, minf=1181
00:15:18.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:18.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:18.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:18.425 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:18.425 latency : target=0, window=0, percentile=100.00%, depth=1
00:15:18.425
00:15:18.425 Run status group 0 (all jobs):
00:15:18.425 READ: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=255MiB (267MB), run=2627-2627msec
00:15:18.425 WRITE: bw=97.6MiB/s (102MB/s), 97.6MiB/s-97.6MiB/s (102MB/s-102MB/s), io=256MiB (269MB), run=2624-2624msec
00:15:19.360 -----------------------------------------------------
00:15:19.360 Suppressions used:
00:15:19.360 count bytes template
00:15:19.360 1 5 /usr/src/fio/parse.c
00:15:19.360 1 8 libtcmalloc_minimal.so
00:15:19.360 1 904 libcrypto.so
00:15:19.360 -----------------------------------------------------
00:15:19.360
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- #
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:19.360 14:03:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:19.360 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:19.360 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:19.360 fio-3.35 00:15:19.360 Starting 2 threads 00:15:41.287 00:15:41.287 first_half: (groupid=0, jobs=1): err= 0: pid=84447: Sun Nov 17 14:04:18 2024 00:15:41.287 read: IOPS=3184, BW=12.4MiB/s (13.0MB/s)(256MiB/20562msec) 00:15:41.287 slat (nsec): min=2993, max=22573, avg=3735.17, stdev=558.32 00:15:41.287 clat (usec): min=456, max=249826, avg=34327.51, stdev=20531.07 00:15:41.287 lat (usec): min=459, max=249831, avg=34331.25, stdev=20531.11 00:15:41.287 clat percentiles (msec): 00:15:41.287 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 29], 00:15:41.287 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:15:41.287 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 66], 00:15:41.287 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 186], 99.95th=[ 220], 00:15:41.287 | 99.99th=[ 245] 00:15:41.287 write: IOPS=3191, BW=12.5MiB/s (13.1MB/s)(256MiB/20537msec); 0 zone resets 00:15:41.287 slat (usec): min=3, max=1276, avg= 5.12, stdev= 5.62 00:15:41.287 clat (usec): min=349, max=36887, avg=5838.43, stdev=5756.87 00:15:41.287 lat (usec): min=353, max=36891, avg=5843.55, stdev=5757.07 00:15:41.287 clat percentiles (usec): 00:15:41.287 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 1123], 20.00th=[ 2474], 00:15:41.287 | 30.00th=[ 3163], 40.00th=[ 3851], 50.00th=[ 4490], 60.00th=[ 4948], 00:15:41.287 | 70.00th=[ 5407], 80.00th=[ 6390], 90.00th=[12125], 95.00th=[17171], 00:15:41.287 | 99.00th=[29754], 99.50th=[31065], 99.90th=[35390], 99.95th=[35914], 00:15:41.287 | 99.99th=[36963] 00:15:41.287 bw ( KiB/s): min= 192, max=48144, per=97.09%, avg=24785.14, stdev=15195.74, samples=21 00:15:41.287 iops : min= 48, max=12036, avg=6196.29, stdev=3798.94, samples=21 00:15:41.287 lat (usec) : 500=0.04%, 750=1.15%, 1000=3.06% 00:15:41.288 lat (msec) : 2=3.32%, 4=13.52%, 10=22.62%, 20=5.54%, 50=47.65% 00:15:41.288 lat (msec) : 100=1.52%, 250=1.58% 00:15:41.288 cpu : usr=99.45%, sys=0.10%, ctx=44, majf=0, minf=5569 00:15:41.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:41.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.288 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.288 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.288 second_half: (groupid=0, jobs=1): err= 0: pid=84448: Sun Nov 17 14:04:18 2024 00:15:41.288 read: IOPS=3207, BW=12.5MiB/s (13.1MB/s)(256MiB/20417msec) 00:15:41.288 slat (nsec): min=2997, max=37235, avg=3737.47, stdev=669.55 00:15:41.288 clat (msec): min=9, max=165, avg=34.57, stdev=18.12 00:15:41.288 lat (msec): min=9, max=166, avg=34.58, stdev=18.12 00:15:41.288 clat percentiles (msec): 00:15:41.288 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 29], 00:15:41.288 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 29], 60.00th=[ 30], 00:15:41.288 | 70.00th=[ 32], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 63], 00:15:41.288 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 
159], 99.95th=[ 161], 00:15:41.288 | 99.99th=[ 163] 00:15:41.288 write: IOPS=3451, BW=13.5MiB/s (14.1MB/s)(256MiB/18986msec); 0 zone resets 00:15:41.288 slat (usec): min=3, max=391, avg= 5.15, stdev= 3.48 00:15:41.288 clat (usec): min=365, max=38247, avg=5311.67, stdev=3545.24 00:15:41.288 lat (usec): min=372, max=38252, avg=5316.83, stdev=3545.62 00:15:41.288 clat percentiles (usec): 00:15:41.288 | 1.00th=[ 816], 5.00th=[ 1565], 10.00th=[ 2278], 20.00th=[ 2933], 00:15:41.288 | 30.00th=[ 3490], 40.00th=[ 4047], 50.00th=[ 4686], 60.00th=[ 5080], 00:15:41.288 | 70.00th=[ 5342], 80.00th=[ 5932], 90.00th=[10552], 95.00th=[12518], 00:15:41.288 | 99.00th=[17695], 99.50th=[23987], 99.90th=[28705], 99.95th=[30016], 00:15:41.288 | 99.99th=[36439] 00:15:41.288 bw ( KiB/s): min= 2592, max=45928, per=100.00%, avg=32762.88, stdev=12981.52, samples=16 00:15:41.288 iops : min= 648, max=11482, avg=8190.69, stdev=3245.36, samples=16 00:15:41.288 lat (usec) : 500=0.03%, 750=0.27%, 1000=0.80% 00:15:41.288 lat (msec) : 2=2.36%, 4=15.97%, 10=24.59%, 20=5.61%, 50=47.30% 00:15:41.288 lat (msec) : 100=1.67%, 250=1.41% 00:15:41.288 cpu : usr=99.22%, sys=0.15%, ctx=54, majf=0, minf=5567 00:15:41.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:41.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.288 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.288 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.288 00:15:41.288 Run status group 0 (all jobs): 00:15:41.288 READ: bw=24.9MiB/s (26.1MB/s), 12.4MiB/s-12.5MiB/s (13.0MB/s-13.1MB/s), io=512MiB (536MB), run=20417-20562msec 00:15:41.288 WRITE: bw=24.9MiB/s (26.1MB/s), 12.5MiB/s-13.5MiB/s (13.1MB/s-14.1MB/s), io=512MiB (537MB), run=18986-20537msec 00:15:41.546 ----------------------------------------------------- 00:15:41.546 Suppressions used: 00:15:41.546 count bytes template 00:15:41.546 2 10 /usr/src/fio/parse.c 00:15:41.546 2 192 /usr/src/fio/iolog.c 00:15:41.546 1 8 libtcmalloc_minimal.so 00:15:41.546 1 904 libcrypto.so 00:15:41.546 ----------------------------------------------------- 00:15:41.546 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local 
sanitizers 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.547 14:04:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:41.805 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:41.805 fio-3.35 00:15:41.805 Starting 1 thread 00:15:56.679 00:15:56.680 test: (groupid=0, jobs=1): err= 0: pid=84716: Sun Nov 17 14:04:32 2024 00:15:56.680 read: IOPS=8263, BW=32.3MiB/s (33.8MB/s)(255MiB/7890msec) 00:15:56.680 slat (nsec): min=2967, max=20297, avg=3423.68, stdev=649.35 00:15:56.680 clat (usec): min=466, max=31779, avg=15482.25, stdev=1519.49 00:15:56.680 lat (usec): min=473, max=31782, avg=15485.68, stdev=1519.51 00:15:56.680 clat percentiles (usec): 00:15:56.680 | 1.00th=[14353], 5.00th=[14615], 10.00th=[14615], 20.00th=[14877], 00:15:56.680 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:15:56.680 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15926], 95.00th=[18220], 00:15:56.680 | 99.00th=[22414], 99.50th=[23987], 99.90th=[29492], 99.95th=[30802], 00:15:56.680 | 99.99th=[31589] 00:15:56.680 write: IOPS=17.1k, BW=66.6MiB/s (69.8MB/s)(256MiB/3843msec); 0 zone resets 00:15:56.680 slat (usec): min=3, max=114, avg= 5.25, stdev= 2.03 00:15:56.680 clat (usec): min=488, max=45088, avg=7462.83, stdev=9393.28 00:15:56.680 lat (usec): min=493, max=45093, avg=7468.08, stdev=9393.28 00:15:56.680 clat percentiles (usec): 00:15:56.680 | 1.00th=[ 603], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 791], 00:15:56.680 | 30.00th=[ 988], 40.00th=[ 1385], 50.00th=[ 5145], 60.00th=[ 5800], 00:15:56.680 | 70.00th=[ 6718], 80.00th=[ 7767], 90.00th=[26870], 95.00th=[28443], 00:15:56.680 | 99.00th=[34341], 99.50th=[36439], 99.90th=[39060], 99.95th=[39584], 00:15:56.680 | 99.99th=[43254] 00:15:56.680 bw ( KiB/s): min=40328, max=85880, per=96.08%, avg=65536.00, stdev=14915.16, samples=8 00:15:56.680 iops : min=10082, max=21470, avg=16384.00, stdev=3728.79, samples=8 00:15:56.680 lat (usec) : 500=0.01%, 750=8.32%, 1000=7.15% 00:15:56.680 lat (msec) : 2=5.13%, 4=0.61%, 10=20.87%, 20=48.66%, 50=9.26% 00:15:56.680 cpu : usr=99.17%, sys=0.13%, ctx=18, majf=0, minf=5577 00:15:56.680 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:56.680 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:56.680 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:56.680 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:56.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:56.680 00:15:56.680 Run status group 0 (all jobs): 00:15:56.680 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=255MiB (267MB), run=7890-7890msec 00:15:56.680 WRITE: bw=66.6MiB/s (69.8MB/s), 66.6MiB/s-66.6MiB/s (69.8MB/s-69.8MB/s), io=256MiB (268MB), run=3843-3843msec 00:15:56.680 ----------------------------------------------------- 00:15:56.680 Suppressions used: 00:15:56.680 count bytes template 00:15:56.680 1 5 /usr/src/fio/parse.c 00:15:56.680 2 192 /usr/src/fio/iolog.c 00:15:56.680 1 8 libtcmalloc_minimal.so 00:15:56.680 1 904 libcrypto.so 00:15:56.680 ----------------------------------------------------- 00:15:56.680 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:15:56.680 Remove shared memory files 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69792 /dev/shm/spdk_tgt_trace.pid83154 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:15:56.680 00:15:56.680 real 0m52.595s 00:15:56.680 user 1m55.316s 00:15:56.680 sys 0m2.370s 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.680 14:04:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:56.680 ************************************ 00:15:56.680 END TEST ftl_fio_basic 00:15:56.680 ************************************ 00:15:56.680 14:04:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:56.680 14:04:33 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:56.680 14:04:33 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.680 14:04:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:56.680 ************************************ 00:15:56.680 START TEST ftl_bdevperf 00:15:56.680 ************************************ 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:15:56.680 * Looking for test storage... 
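The three fio_bdev invocations traced above (randw-verify, randw-verify-j2, randw-verify-depth128) all go through the same wrapper in autotest_common.sh: ldd locates whichever sanitizer runtime the spdk_bdev ioengine was linked against, and that library is preloaded ahead of the plugin so fio can dlopen it without ASAN init-order failures. A condensed sketch of that pattern, using the paths from this run (treat them as placeholders on other hosts):

    #!/usr/bin/env bash
    # Condensed form of fio_bdev/fio_plugin as traced above (autotest_common.sh@1337-1352)
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # external fio ioengine
    fio_bin=/usr/src/fio/fio
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # ldd prints e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # The sanitizer runtime must come first in LD_PRELOAD, then the plugin itself
    LD_PRELOAD="$asan_lib $plugin" "$fio_bin" "$1"   # $1 = fio job file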
00:15:56.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:56.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.680 --rc genhtml_branch_coverage=1 00:15:56.680 --rc genhtml_function_coverage=1 00:15:56.680 --rc genhtml_legend=1 00:15:56.680 --rc geninfo_all_blocks=1 00:15:56.680 --rc geninfo_unexecuted_blocks=1 00:15:56.680 00:15:56.680 ' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:56.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.680 --rc genhtml_branch_coverage=1 00:15:56.680 
--rc genhtml_function_coverage=1 00:15:56.680 --rc genhtml_legend=1 00:15:56.680 --rc geninfo_all_blocks=1 00:15:56.680 --rc geninfo_unexecuted_blocks=1 00:15:56.680 00:15:56.680 ' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:56.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.680 --rc genhtml_branch_coverage=1 00:15:56.680 --rc genhtml_function_coverage=1 00:15:56.680 --rc genhtml_legend=1 00:15:56.680 --rc geninfo_all_blocks=1 00:15:56.680 --rc geninfo_unexecuted_blocks=1 00:15:56.680 00:15:56.680 ' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:56.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.680 --rc genhtml_branch_coverage=1 00:15:56.680 --rc genhtml_function_coverage=1 00:15:56.680 --rc genhtml_legend=1 00:15:56.680 --rc geninfo_all_blocks=1 00:15:56.680 --rc geninfo_unexecuted_blocks=1 00:15:56.680 00:15:56.680 ' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.680 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=84932 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 84932 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 84932 ']' 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.681 14:04:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:56.681 [2024-11-17 14:04:33.450957] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
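bdevperf is started idle here (-z, wait to be driven over RPC) and pinned to the yet-to-be-created ftl0 bdev (-T ftl0); the script installs a kill trap and blocks in waitforlisten until the app answers on its RPC socket before configuring anything. A sketch of that supervision pattern follows; the polling loop is an assumption of roughly what the waitforlisten helper does, not its actual body:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!                                   # 84932 in this run
    trap 'kill "$bdevperf_pid"; exit 1' SIGINT SIGTERM EXIT
    # Poll until the target answers RPCs on the default socket
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done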
00:15:56.681 [2024-11-17 14:04:33.451687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84932 ] 00:15:56.681 [2024-11-17 14:04:33.599761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.681 [2024-11-17 14:04:33.631641] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:56.681 { 00:15:56.681 "name": "nvme0n1", 00:15:56.681 "aliases": [ 00:15:56.681 "ab4c8fb5-2b5e-44f8-a854-ac57410ea7ae" 00:15:56.681 ], 00:15:56.681 "product_name": "NVMe disk", 00:15:56.681 "block_size": 4096, 00:15:56.681 "num_blocks": 1310720, 00:15:56.681 "uuid": "ab4c8fb5-2b5e-44f8-a854-ac57410ea7ae", 00:15:56.681 "numa_id": -1, 00:15:56.681 "assigned_rate_limits": { 00:15:56.681 "rw_ios_per_sec": 0, 00:15:56.681 "rw_mbytes_per_sec": 0, 00:15:56.681 "r_mbytes_per_sec": 0, 00:15:56.681 "w_mbytes_per_sec": 0 00:15:56.681 }, 00:15:56.681 "claimed": true, 00:15:56.681 "claim_type": "read_many_write_one", 00:15:56.681 "zoned": false, 00:15:56.681 "supported_io_types": { 00:15:56.681 "read": true, 00:15:56.681 "write": true, 00:15:56.681 "unmap": true, 00:15:56.681 "flush": true, 00:15:56.681 "reset": true, 00:15:56.681 "nvme_admin": true, 00:15:56.681 "nvme_io": true, 00:15:56.681 "nvme_io_md": false, 00:15:56.681 "write_zeroes": true, 00:15:56.681 "zcopy": false, 00:15:56.681 "get_zone_info": false, 00:15:56.681 "zone_management": false, 00:15:56.681 "zone_append": false, 00:15:56.681 "compare": true, 00:15:56.681 "compare_and_write": false, 00:15:56.681 "abort": true, 00:15:56.681 "seek_hole": false, 00:15:56.681 "seek_data": false, 00:15:56.681 "copy": true, 00:15:56.681 "nvme_iov_md": false 00:15:56.681 }, 00:15:56.681 "driver_specific": { 00:15:56.681 
"nvme": [ 00:15:56.681 { 00:15:56.681 "pci_address": "0000:00:11.0", 00:15:56.681 "trid": { 00:15:56.681 "trtype": "PCIe", 00:15:56.681 "traddr": "0000:00:11.0" 00:15:56.681 }, 00:15:56.681 "ctrlr_data": { 00:15:56.681 "cntlid": 0, 00:15:56.681 "vendor_id": "0x1b36", 00:15:56.681 "model_number": "QEMU NVMe Ctrl", 00:15:56.681 "serial_number": "12341", 00:15:56.681 "firmware_revision": "8.0.0", 00:15:56.681 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:56.681 "oacs": { 00:15:56.681 "security": 0, 00:15:56.681 "format": 1, 00:15:56.681 "firmware": 0, 00:15:56.681 "ns_manage": 1 00:15:56.681 }, 00:15:56.681 "multi_ctrlr": false, 00:15:56.681 "ana_reporting": false 00:15:56.681 }, 00:15:56.681 "vs": { 00:15:56.681 "nvme_version": "1.4" 00:15:56.681 }, 00:15:56.681 "ns_data": { 00:15:56.681 "id": 1, 00:15:56.681 "can_share": false 00:15:56.681 } 00:15:56.681 } 00:15:56.681 ], 00:15:56.681 "mp_policy": "active_passive" 00:15:56.681 } 00:15:56.681 } 00:15:56.681 ]' 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:56.681 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:56.940 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=9d1bb0ea-329f-4e80-aaab-221222c5fd1a 00:15:56.940 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:15:56.940 14:04:34 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d1bb0ea-329f-4e80-aaab-221222c5fd1a 00:15:56.940 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:57.198 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=676f6b94-c723-4f75-85ff-b42635aa8172 00:15:57.198 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 676f6b94-c723-4f75-85ff-b42635aa8172 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.457 14:04:35 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:57.457 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:57.716 { 00:15:57.716 "name": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:57.716 "aliases": [ 00:15:57.716 "lvs/nvme0n1p0" 00:15:57.716 ], 00:15:57.716 "product_name": "Logical Volume", 00:15:57.716 "block_size": 4096, 00:15:57.716 "num_blocks": 26476544, 00:15:57.716 "uuid": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:57.716 "assigned_rate_limits": { 00:15:57.716 "rw_ios_per_sec": 0, 00:15:57.716 "rw_mbytes_per_sec": 0, 00:15:57.716 "r_mbytes_per_sec": 0, 00:15:57.716 "w_mbytes_per_sec": 0 00:15:57.716 }, 00:15:57.716 "claimed": false, 00:15:57.716 "zoned": false, 00:15:57.716 "supported_io_types": { 00:15:57.716 "read": true, 00:15:57.716 "write": true, 00:15:57.716 "unmap": true, 00:15:57.716 "flush": false, 00:15:57.716 "reset": true, 00:15:57.716 "nvme_admin": false, 00:15:57.716 "nvme_io": false, 00:15:57.716 "nvme_io_md": false, 00:15:57.716 "write_zeroes": true, 00:15:57.716 "zcopy": false, 00:15:57.716 "get_zone_info": false, 00:15:57.716 "zone_management": false, 00:15:57.716 "zone_append": false, 00:15:57.716 "compare": false, 00:15:57.716 "compare_and_write": false, 00:15:57.716 "abort": false, 00:15:57.716 "seek_hole": true, 00:15:57.716 "seek_data": true, 00:15:57.716 "copy": false, 00:15:57.716 "nvme_iov_md": false 00:15:57.716 }, 00:15:57.716 "driver_specific": { 00:15:57.716 "lvol": { 00:15:57.716 "lvol_store_uuid": "676f6b94-c723-4f75-85ff-b42635aa8172", 00:15:57.716 "base_bdev": "nvme0n1", 00:15:57.716 "thin_provision": true, 00:15:57.716 "num_allocated_clusters": 0, 00:15:57.716 "snapshot": false, 00:15:57.716 "clone": false, 00:15:57.716 "esnap_clone": false 00:15:57.716 } 00:15:57.716 } 00:15:57.716 } 00:15:57.716 ]' 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:15:57.716 14:04:35 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:57.975 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:58.234 { 00:15:58.234 "name": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:58.234 "aliases": [ 00:15:58.234 "lvs/nvme0n1p0" 00:15:58.234 ], 00:15:58.234 "product_name": "Logical Volume", 00:15:58.234 "block_size": 4096, 00:15:58.234 "num_blocks": 26476544, 00:15:58.234 "uuid": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:58.234 "assigned_rate_limits": { 00:15:58.234 "rw_ios_per_sec": 0, 00:15:58.234 "rw_mbytes_per_sec": 0, 00:15:58.234 "r_mbytes_per_sec": 0, 00:15:58.234 "w_mbytes_per_sec": 0 00:15:58.234 }, 00:15:58.234 "claimed": false, 00:15:58.234 "zoned": false, 00:15:58.234 "supported_io_types": { 00:15:58.234 "read": true, 00:15:58.234 "write": true, 00:15:58.234 "unmap": true, 00:15:58.234 "flush": false, 00:15:58.234 "reset": true, 00:15:58.234 "nvme_admin": false, 00:15:58.234 "nvme_io": false, 00:15:58.234 "nvme_io_md": false, 00:15:58.234 "write_zeroes": true, 00:15:58.234 "zcopy": false, 00:15:58.234 "get_zone_info": false, 00:15:58.234 "zone_management": false, 00:15:58.234 "zone_append": false, 00:15:58.234 "compare": false, 00:15:58.234 "compare_and_write": false, 00:15:58.234 "abort": false, 00:15:58.234 "seek_hole": true, 00:15:58.234 "seek_data": true, 00:15:58.234 "copy": false, 00:15:58.234 "nvme_iov_md": false 00:15:58.234 }, 00:15:58.234 "driver_specific": { 00:15:58.234 "lvol": { 00:15:58.234 "lvol_store_uuid": "676f6b94-c723-4f75-85ff-b42635aa8172", 00:15:58.234 "base_bdev": "nvme0n1", 00:15:58.234 "thin_provision": true, 00:15:58.234 "num_allocated_clusters": 0, 00:15:58.234 "snapshot": false, 00:15:58.234 "clone": false, 00:15:58.234 "esnap_clone": false 00:15:58.234 } 00:15:58.234 } 00:15:58.234 } 00:15:58.234 ]' 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:15:58.234 14:04:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:15:58.493 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 00:15:58.751 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:58.751 { 00:15:58.751 "name": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:58.751 "aliases": [ 00:15:58.751 "lvs/nvme0n1p0" 00:15:58.751 ], 00:15:58.751 "product_name": "Logical Volume", 00:15:58.751 "block_size": 4096, 00:15:58.751 "num_blocks": 26476544, 00:15:58.751 "uuid": "fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69", 00:15:58.751 "assigned_rate_limits": { 00:15:58.751 "rw_ios_per_sec": 0, 00:15:58.751 "rw_mbytes_per_sec": 0, 00:15:58.751 "r_mbytes_per_sec": 0, 00:15:58.751 "w_mbytes_per_sec": 0 00:15:58.751 }, 00:15:58.751 "claimed": false, 00:15:58.751 "zoned": false, 00:15:58.751 "supported_io_types": { 00:15:58.751 "read": true, 00:15:58.751 "write": true, 00:15:58.751 "unmap": true, 00:15:58.751 "flush": false, 00:15:58.751 "reset": true, 00:15:58.751 "nvme_admin": false, 00:15:58.751 "nvme_io": false, 00:15:58.751 "nvme_io_md": false, 00:15:58.751 "write_zeroes": true, 00:15:58.751 "zcopy": false, 00:15:58.751 "get_zone_info": false, 00:15:58.751 "zone_management": false, 00:15:58.751 "zone_append": false, 00:15:58.751 "compare": false, 00:15:58.751 "compare_and_write": false, 00:15:58.751 "abort": false, 00:15:58.751 "seek_hole": true, 00:15:58.751 "seek_data": true, 00:15:58.751 "copy": false, 00:15:58.751 "nvme_iov_md": false 00:15:58.751 }, 00:15:58.751 "driver_specific": { 00:15:58.751 "lvol": { 00:15:58.751 "lvol_store_uuid": "676f6b94-c723-4f75-85ff-b42635aa8172", 00:15:58.751 "base_bdev": "nvme0n1", 00:15:58.751 "thin_provision": true, 00:15:58.751 "num_allocated_clusters": 0, 00:15:58.751 "snapshot": false, 00:15:58.751 "clone": false, 00:15:58.751 "esnap_clone": false 00:15:58.751 } 00:15:58.751 } 00:15:58.751 } 00:15:58.751 ]' 00:15:58.751 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:15:58.752 14:04:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe9ec4a6-1bd8-4429-b6a9-a2617a3ddc69 -c nvc0n1p0 --l2p_dram_limit 20 00:15:58.752 [2024-11-17 14:04:37.045007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.752 [2024-11-17 14:04:37.045194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:58.752 [2024-11-17 14:04:37.045213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:58.752 [2024-11-17 14:04:37.045222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.752 [2024-11-17 14:04:37.045285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.752 [2024-11-17 14:04:37.045294] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:58.752 [2024-11-17 14:04:37.045304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:58.752 [2024-11-17 14:04:37.045309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.752 [2024-11-17 14:04:37.045325] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:58.752 [2024-11-17 14:04:37.045551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:58.752 [2024-11-17 14:04:37.045568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.752 [2024-11-17 14:04:37.045574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:58.752 [2024-11-17 14:04:37.045585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:15:58.752 [2024-11-17 14:04:37.045590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:58.752 [2024-11-17 14:04:37.045643] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d5c4efc5-f107-4f3d-9bb5-20cb69ac4170 00:15:58.752 [2024-11-17 14:04:37.046561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:58.752 [2024-11-17 14:04:37.046579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:58.752 [2024-11-17 14:04:37.046586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:15:58.752 [2024-11-17 14:04:37.046594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.051326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.011 [2024-11-17 14:04:37.051436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:59.011 [2024-11-17 14:04:37.051448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.701 ms 00:15:59.011 [2024-11-17 14:04:37.051459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.051518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.011 [2024-11-17 14:04:37.051526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:59.011 [2024-11-17 14:04:37.051533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:59.011 [2024-11-17 14:04:37.051540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.051580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.011 [2024-11-17 14:04:37.051590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:59.011 [2024-11-17 14:04:37.051597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:59.011 [2024-11-17 14:04:37.051604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.051619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:59.011 [2024-11-17 14:04:37.052873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.011 [2024-11-17 14:04:37.052900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:59.011 [2024-11-17 14:04:37.052909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:15:59.011 [2024-11-17 14:04:37.052914] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.052941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.011 [2024-11-17 14:04:37.052947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:59.011 [2024-11-17 14:04:37.052956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:15:59.011 [2024-11-17 14:04:37.052964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.011 [2024-11-17 14:04:37.052976] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:59.011 [2024-11-17 14:04:37.053085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:59.011 [2024-11-17 14:04:37.053095] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:59.011 [2024-11-17 14:04:37.053103] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:59.011 [2024-11-17 14:04:37.053113] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:59.011 [2024-11-17 14:04:37.053120] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:59.011 [2024-11-17 14:04:37.053129] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:59.011 [2024-11-17 14:04:37.053134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:59.011 [2024-11-17 14:04:37.053141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:59.012 [2024-11-17 14:04:37.053147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:59.012 [2024-11-17 14:04:37.053153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.012 [2024-11-17 14:04:37.053159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:59.012 [2024-11-17 14:04:37.053168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:15:59.012 [2024-11-17 14:04:37.053173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.012 [2024-11-17 14:04:37.053247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.012 [2024-11-17 14:04:37.053254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:59.012 [2024-11-17 14:04:37.053264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:15:59.012 [2024-11-17 14:04:37.053270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.012 [2024-11-17 14:04:37.053340] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:59.012 [2024-11-17 14:04:37.053351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:59.012 [2024-11-17 14:04:37.053358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:59.012 [2024-11-17 14:04:37.053379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:59.012 
[2024-11-17 14:04:37.053391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:59.012 [2024-11-17 14:04:37.053399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:59.012 [2024-11-17 14:04:37.053410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:59.012 [2024-11-17 14:04:37.053416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:59.012 [2024-11-17 14:04:37.053424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:59.012 [2024-11-17 14:04:37.053429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:59.012 [2024-11-17 14:04:37.053436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:59.012 [2024-11-17 14:04:37.053441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:59.012 [2024-11-17 14:04:37.053453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:59.012 [2024-11-17 14:04:37.053472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:59.012 [2024-11-17 14:04:37.053489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:59.012 [2024-11-17 14:04:37.053506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:59.012 [2024-11-17 14:04:37.053525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:59.012 [2024-11-17 14:04:37.053545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:59.012 [2024-11-17 14:04:37.053558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:59.012 [2024-11-17 14:04:37.053564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:59.012 [2024-11-17 14:04:37.053571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:59.012 [2024-11-17 14:04:37.053577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:59.012 [2024-11-17 14:04:37.053584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:15:59.012 [2024-11-17 14:04:37.053590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:59.012 [2024-11-17 14:04:37.053603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:59.012 [2024-11-17 14:04:37.053610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053615] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:59.012 [2024-11-17 14:04:37.053625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:59.012 [2024-11-17 14:04:37.053633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:59.012 [2024-11-17 14:04:37.053650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:59.012 [2024-11-17 14:04:37.053658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:59.012 [2024-11-17 14:04:37.053663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:59.012 [2024-11-17 14:04:37.053670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:59.012 [2024-11-17 14:04:37.053676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:59.012 [2024-11-17 14:04:37.053683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:59.012 [2024-11-17 14:04:37.053692] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:59.012 [2024-11-17 14:04:37.053701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:59.012 [2024-11-17 14:04:37.053716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:59.012 [2024-11-17 14:04:37.053723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:59.012 [2024-11-17 14:04:37.053731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:59.012 [2024-11-17 14:04:37.053737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:59.012 [2024-11-17 14:04:37.053745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:59.012 [2024-11-17 14:04:37.053752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:59.012 [2024-11-17 14:04:37.053759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:59.012 [2024-11-17 14:04:37.053765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:59.012 [2024-11-17 14:04:37.053773] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:59.012 [2024-11-17 14:04:37.053807] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:59.012 [2024-11-17 14:04:37.053817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:59.012 [2024-11-17 14:04:37.053832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:59.012 [2024-11-17 14:04:37.053838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:59.012 [2024-11-17 14:04:37.053847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:59.012 [2024-11-17 14:04:37.053853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.012 [2024-11-17 14:04:37.053864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:59.012 [2024-11-17 14:04:37.053870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:15:59.012 [2024-11-17 14:04:37.053879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.012 [2024-11-17 14:04:37.053902] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
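The earlier l2p region entry (blocks: 80.00 MiB) together with the "L2P entries: 20971520" and "L2P address size: 4" lines explains the --l2p_dram_limit 20 passed to bdev_ftl_create: the full logical-to-physical table does not fit in the allowed DRAM, so FTL keeps only part of it resident and pages the rest from the NV cache device. A quick check from the numbers reported above:

    # 20971520 L2P entries x 4 bytes per entry = the 80.00 MiB l2p region above
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80

With the limit set to 20 MiB, only a quarter of the table can stay in DRAM at once, which matches the later "l2p maximum resident size is: 19 (of 20) MiB" notice; the remainder is demand-paged.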
00:15:59.012 [2024-11-17 14:04:37.053910] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:00.915 [2024-11-17 14:04:39.118266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.118477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:00.915 [2024-11-17 14:04:39.118547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2064.355 ms 00:16:00.915 [2024-11-17 14:04:39.118575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.135994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.136221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:00.915 [2024-11-17 14:04:39.136359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.303 ms 00:16:00.915 [2024-11-17 14:04:39.136399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.136623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.136665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:00.915 [2024-11-17 14:04:39.136829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:00.915 [2024-11-17 14:04:39.136862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.145181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.145332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:00.915 [2024-11-17 14:04:39.145393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.245 ms 00:16:00.915 [2024-11-17 14:04:39.145418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.145486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.145512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:00.915 [2024-11-17 14:04:39.145533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:00.915 [2024-11-17 14:04:39.145584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.145955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.146056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:00.915 [2024-11-17 14:04:39.146107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:16:00.915 [2024-11-17 14:04:39.146133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.146263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.146292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:00.915 [2024-11-17 14:04:39.146339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:16:00.915 [2024-11-17 14:04:39.146362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.150652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.150754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:00.915 [2024-11-17 
14:04:39.150805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:16:00.915 [2024-11-17 14:04:39.150829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.158967] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:00.915 [2024-11-17 14:04:39.163861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.163960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:00.915 [2024-11-17 14:04:39.164013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.921 ms 00:16:00.915 [2024-11-17 14:04:39.164038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.205675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.205806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:00.915 [2024-11-17 14:04:39.205871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.600 ms 00:16:00.915 [2024-11-17 14:04:39.205895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.206077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.206115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:00.915 [2024-11-17 14:04:39.206170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:16:00.915 [2024-11-17 14:04:39.206192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.208889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.208993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:00.915 [2024-11-17 14:04:39.209049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.626 ms 00:16:00.915 [2024-11-17 14:04:39.209072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.211300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.211395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:00.915 [2024-11-17 14:04:39.211453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.161 ms 00:16:00.915 [2024-11-17 14:04:39.211462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.915 [2024-11-17 14:04:39.211748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.915 [2024-11-17 14:04:39.211765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:00.915 [2024-11-17 14:04:39.211779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:16:00.915 [2024-11-17 14:04:39.211787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.234158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.234192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:01.173 [2024-11-17 14:04:39.234208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.352 ms 00:16:01.173 [2024-11-17 14:04:39.234216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.237626] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.237740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:01.173 [2024-11-17 14:04:39.237760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:16:01.173 [2024-11-17 14:04:39.237769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.240370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.240398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:01.173 [2024-11-17 14:04:39.240409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.578 ms 00:16:01.173 [2024-11-17 14:04:39.240416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.243646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.243681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:01.173 [2024-11-17 14:04:39.243696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:16:01.173 [2024-11-17 14:04:39.243704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.243728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.243737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:01.173 [2024-11-17 14:04:39.243750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:01.173 [2024-11-17 14:04:39.243760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.173 [2024-11-17 14:04:39.243823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:01.173 [2024-11-17 14:04:39.243832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:01.173 [2024-11-17 14:04:39.243842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:01.174 [2024-11-17 14:04:39.243849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:01.174 [2024-11-17 14:04:39.244678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2199.292 ms, result 0 00:16:01.174 { 00:16:01.174 "name": "ftl0", 00:16:01.174 "uuid": "d5c4efc5-f107-4f3d-9bb5-20cb69ac4170" 00:16:01.174 } 00:16:01.174 14:04:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:01.174 14:04:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:01.174 14:04:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:01.174 14:04:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:01.482 [2024-11-17 14:04:39.555727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:01.482 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:01.482 Zero copy mechanism will not be used. 00:16:01.482 Running I/O for 4 seconds... 
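The untimestamped lines above are printed by bdevperf itself: this first run uses -o 69632 (68 KiB, i.e. 65536 + 4096 bytes), which exceeds the 65536-byte zero-copy threshold, so the zero-copy path is skipped. A minimal sketch of that check, using only the constants from the message itself:

io_size=69632 zc_threshold=65536
if ((io_size > zc_threshold)); then
  echo "I/O size of $io_size is greater than zero copy threshold ($zc_threshold)."
fi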
00:16:03.404 3251.00 IOPS, 215.89 MiB/s [2024-11-17T14:04:42.639Z] 3270.00 IOPS, 217.15 MiB/s [2024-11-17T14:04:43.573Z] 3369.67 IOPS, 223.77 MiB/s [2024-11-17T14:04:43.573Z] 3468.50 IOPS, 230.33 MiB/s 00:16:05.272 Latency(us) 00:16:05.272 [2024-11-17T14:04:43.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.272 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:05.272 ftl0 : 4.00 3467.58 230.27 0.00 0.00 302.82 148.87 1978.68 00:16:05.272 [2024-11-17T14:04:43.573Z] =================================================================================================================== 00:16:05.272 [2024-11-17T14:04:43.573Z] Total : 3467.58 230.27 0.00 0.00 302.82 148.87 1978.68 00:16:05.272 { 00:16:05.272 "results": [ 00:16:05.272 { 00:16:05.272 "job": "ftl0", 00:16:05.272 "core_mask": "0x1", 00:16:05.272 "workload": "randwrite", 00:16:05.272 "status": "finished", 00:16:05.272 "queue_depth": 1, 00:16:05.272 "io_size": 69632, 00:16:05.272 "runtime": 4.001348, 00:16:05.272 "iops": 3467.581425059755, 00:16:05.272 "mibps": 230.26907900787435, 00:16:05.272 "io_failed": 0, 00:16:05.272 "io_timeout": 0, 00:16:05.272 "avg_latency_us": 302.82414924462927, 00:16:05.272 "min_latency_us": 148.87384615384616, 00:16:05.272 "max_latency_us": 1978.683076923077 00:16:05.272 } 00:16:05.272 ], 00:16:05.272 "core_count": 1 00:16:05.272 } 00:16:05.272 [2024-11-17 14:04:43.562780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:05.531 14:04:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:05.531 [2024-11-17 14:04:43.673689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:05.531 Running I/O for 4 seconds... 
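Run 1's summary appears twice above, as the Latency(us) table and as the JSON results block, and the two agree: MiB/s is just IOPS x io_size / 2^20. A hedged cross-check sketch, assuming the JSON block has been saved to a hypothetical results.json (jq is already used by this test; bc is assumed available):

jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s"' results.json
echo "scale=2; 3467.58 * 69632 / 1048576" | bc   # -> 230.26, matching "mibps" above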
00:16:07.401 12080.00 IOPS, 47.19 MiB/s [2024-11-17T14:04:47.080Z] 10475.00 IOPS, 40.92 MiB/s [2024-11-17T14:04:48.016Z] 9348.33 IOPS, 36.52 MiB/s [2024-11-17T14:04:48.016Z] 9570.25 IOPS, 37.38 MiB/s 00:16:09.715 Latency(us) 00:16:09.715 [2024-11-17T14:04:48.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.715 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.715 ftl0 : 4.03 9533.36 37.24 0.00 0.00 13381.07 226.86 43757.88 00:16:09.715 [2024-11-17T14:04:48.016Z] =================================================================================================================== 00:16:09.715 [2024-11-17T14:04:48.016Z] Total : 9533.36 37.24 0.00 0.00 13381.07 0.00 43757.88 00:16:09.715 [2024-11-17 14:04:47.708721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:09.715 { 00:16:09.715 "results": [ 00:16:09.715 { 00:16:09.715 "job": "ftl0", 00:16:09.715 "core_mask": "0x1", 00:16:09.715 "workload": "randwrite", 00:16:09.715 "status": "finished", 00:16:09.715 "queue_depth": 128, 00:16:09.715 "io_size": 4096, 00:16:09.715 "runtime": 4.02859, 00:16:09.715 "iops": 9533.360307204257, 00:16:09.715 "mibps": 37.23968870001663, 00:16:09.715 "io_failed": 0, 00:16:09.715 "io_timeout": 0, 00:16:09.715 "avg_latency_us": 13381.071074311305, 00:16:09.715 "min_latency_us": 226.85538461538462, 00:16:09.715 "max_latency_us": 43757.88307692308 00:16:09.715 } 00:16:09.715 ], 00:16:09.715 "core_count": 1 00:16:09.715 } 00:16:09.715 14:04:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:09.715 [2024-11-17 14:04:47.814795] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:09.715 Running I/O for 4 seconds... 
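Run 2's figures above can be sanity-checked with Little's law: mean queue occupancy = IOPS x mean latency, which should sit just under the configured -q 128 while the queue stays full. A hedged arithmetic sketch (bc assumed available):

echo "scale=1; 9533.36 * 13381.07 / 1000000" | bc   # 9533.36 IOPS x 13381.07 us -> 127.5 in flight, ~= depth 128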
00:16:11.585 6748.00 IOPS, 26.36 MiB/s [2024-11-17T14:04:51.260Z] 8013.00 IOPS, 31.30 MiB/s [2024-11-17T14:04:51.827Z] 8201.67 IOPS, 32.04 MiB/s [2024-11-17T14:04:52.086Z] 8421.25 IOPS, 32.90 MiB/s 00:16:13.785 Latency(us) 00:16:13.785 [2024-11-17T14:04:52.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.785 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:13.785 Verification LBA range: start 0x0 length 0x1400000 00:16:13.785 ftl0 : 4.01 8434.86 32.95 0.00 0.00 15133.61 215.83 93161.94 00:16:13.785 [2024-11-17T14:04:52.086Z] =================================================================================================================== 00:16:13.785 [2024-11-17T14:04:52.086Z] Total : 8434.86 32.95 0.00 0.00 15133.61 0.00 93161.94 00:16:13.785 [2024-11-17 14:04:51.830170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:13.785 { 00:16:13.785 "results": [ 00:16:13.785 { 00:16:13.785 "job": "ftl0", 00:16:13.785 "core_mask": "0x1", 00:16:13.785 "workload": "verify", 00:16:13.785 "status": "finished", 00:16:13.785 "verify_range": { 00:16:13.785 "start": 0, 00:16:13.785 "length": 20971520 00:16:13.785 }, 00:16:13.785 "queue_depth": 128, 00:16:13.785 "io_size": 4096, 00:16:13.785 "runtime": 4.008603, 00:16:13.785 "iops": 8434.858727591632, 00:16:13.785 "mibps": 32.948666904654814, 00:16:13.785 "io_failed": 0, 00:16:13.785 "io_timeout": 0, 00:16:13.785 "avg_latency_us": 15133.606882581515, 00:16:13.785 "min_latency_us": 215.8276923076923, 00:16:13.785 "max_latency_us": 93161.94461538462 00:16:13.785 } 00:16:13.785 ], 00:16:13.785 "core_count": 1 00:16:13.785 } 00:16:13.785 14:04:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:13.785 [2024-11-17 14:04:52.034484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.785 [2024-11-17 14:04:52.034528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:13.785 [2024-11-17 14:04:52.034543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:13.785 [2024-11-17 14:04:52.034552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.785 [2024-11-17 14:04:52.034577] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:13.785 [2024-11-17 14:04:52.034981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.785 [2024-11-17 14:04:52.035000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:13.785 [2024-11-17 14:04:52.035008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:16:13.785 [2024-11-17 14:04:52.035019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.785 [2024-11-17 14:04:52.037020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.785 [2024-11-17 14:04:52.037161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:13.785 [2024-11-17 14:04:52.037181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.978 ms 00:16:13.785 [2024-11-17 14:04:52.037193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.177519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.177560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:14.046 [2024-11-17 14:04:52.177571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.303 ms 00:16:14.046 [2024-11-17 14:04:52.177581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.183731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.183859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.046 [2024-11-17 14:04:52.183874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:16:14.046 [2024-11-17 14:04:52.183886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.184953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.184986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.046 [2024-11-17 14:04:52.184995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:16:14.046 [2024-11-17 14:04:52.185004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.188839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.188878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.046 [2024-11-17 14:04:52.188888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:16:14.046 [2024-11-17 14:04:52.188902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.189008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.189019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.046 [2024-11-17 14:04:52.189027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:14.046 [2024-11-17 14:04:52.189036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.190867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.190988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:14.046 [2024-11-17 14:04:52.191002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.816 ms 00:16:14.046 [2024-11-17 14:04:52.191011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.192446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.192477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:14.046 [2024-11-17 14:04:52.192485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:16:14.046 [2024-11-17 14:04:52.192493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.193643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.193674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.046 [2024-11-17 14:04:52.193682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:16:14.046 [2024-11-17 14:04:52.193695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.194699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.046 [2024-11-17 14:04:52.194746] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.046 [2024-11-17 14:04:52.194759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:16:14.046 [2024-11-17 14:04:52.194771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.046 [2024-11-17 14:04:52.194812] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.046 [2024-11-17 14:04:52.194835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.194994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:14.046 [2024-11-17 14:04:52.195129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.046 [2024-11-17 14:04:52.195347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.195991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196252] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.047 [2024-11-17 14:04:52.196322] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.047 [2024-11-17 14:04:52.196335] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d5c4efc5-f107-4f3d-9bb5-20cb69ac4170 00:16:14.047 [2024-11-17 14:04:52.196348] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.047 [2024-11-17 14:04:52.196364] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.047 [2024-11-17 14:04:52.196381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.047 [2024-11-17 14:04:52.196395] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.047 [2024-11-17 14:04:52.196409] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.047 [2024-11-17 14:04:52.196421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.047 [2024-11-17 14:04:52.196435] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.047 [2024-11-17 14:04:52.196445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.047 [2024-11-17 14:04:52.196458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.047 [2024-11-17 14:04:52.196470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.047 [2024-11-17 14:04:52.196485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.047 [2024-11-17 14:04:52.196498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.659 ms 00:16:14.047 [2024-11-17 14:04:52.196515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.047 [2024-11-17 14:04:52.198278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.047 [2024-11-17 14:04:52.198414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.047 [2024-11-17 14:04:52.198479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:16:14.047 [2024-11-17 14:04:52.198506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.047 [2024-11-17 14:04:52.198643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.047 [2024-11-17 14:04:52.198678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.047 [2024-11-17 14:04:52.198774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:14.047 [2024-11-17 14:04:52.198801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.047 [2024-11-17 14:04:52.203301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.047 [2024-11-17 14:04:52.203411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.047 [2024-11-17 14:04:52.203491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.047 [2024-11-17 14:04:52.203562] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:14.047 [2024-11-17 14:04:52.203630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.047 [2024-11-17 14:04:52.203685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.047 [2024-11-17 14:04:52.203734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.047 [2024-11-17 14:04:52.203757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.047 [2024-11-17 14:04:52.203848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.047 [2024-11-17 14:04:52.203911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.048 [2024-11-17 14:04:52.203958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.203983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.204103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.204137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.048 [2024-11-17 14:04:52.204220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.204266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.212689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.212821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.048 [2024-11-17 14:04:52.212879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.212904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.220422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.220565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.048 [2024-11-17 14:04:52.220619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.220643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.220697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.220759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.048 [2024-11-17 14:04:52.220782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.220803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.220886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.220949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.048 [2024-11-17 14:04:52.220995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.221020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.221101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.221252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.048 [2024-11-17 14:04:52.221264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:14.048 [2024-11-17 14:04:52.221273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.221311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.221322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:14.048 [2024-11-17 14:04:52.221330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.221339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.221370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.221384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:14.048 [2024-11-17 14:04:52.221394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.221405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.221443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.048 [2024-11-17 14:04:52.221453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:14.048 [2024-11-17 14:04:52.221461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.048 [2024-11-17 14:04:52.221472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.048 [2024-11-17 14:04:52.221591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 187.071 ms, result 0 00:16:14.048 true 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 84932 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 84932 ']' 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 84932 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84932 00:16:14.048 killing process with pid 84932 00:16:14.048 Received shutdown signal, test time was about 4.000000 seconds 00:16:14.048 00:16:14.048 Latency(us) 00:16:14.048 [2024-11-17T14:04:52.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.048 [2024-11-17T14:04:52.349Z] =================================================================================================================== 00:16:14.048 [2024-11-17T14:04:52.349Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84932' 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 84932 00:16:14.048 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 84932 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:14.307 Remove shared memory files 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:14.307 14:04:52 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:14.307 ************************************ 00:16:14.307 END TEST ftl_bdevperf 00:16:14.307 ************************************ 00:16:14.307 00:16:14.307 real 0m19.332s 00:16:14.307 user 0m22.033s 00:16:14.307 sys 0m0.767s 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.307 14:04:52 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 14:04:52 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:14.566 14:04:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:14.566 14:04:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.566 14:04:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:14.566 ************************************ 00:16:14.566 START TEST ftl_trim 00:16:14.566 ************************************ 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:14.566 * Looking for test storage... 00:16:14.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.566 14:04:52 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:14.566 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.567 --rc genhtml_branch_coverage=1 00:16:14.567 --rc genhtml_function_coverage=1 00:16:14.567 --rc genhtml_legend=1 00:16:14.567 --rc geninfo_all_blocks=1 00:16:14.567 --rc geninfo_unexecuted_blocks=1 00:16:14.567 00:16:14.567 ' 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.567 --rc genhtml_branch_coverage=1 00:16:14.567 --rc genhtml_function_coverage=1 00:16:14.567 --rc genhtml_legend=1 00:16:14.567 --rc geninfo_all_blocks=1 00:16:14.567 --rc geninfo_unexecuted_blocks=1 00:16:14.567 00:16:14.567 ' 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.567 --rc genhtml_branch_coverage=1 00:16:14.567 --rc genhtml_function_coverage=1 00:16:14.567 --rc genhtml_legend=1 00:16:14.567 --rc geninfo_all_blocks=1 00:16:14.567 --rc geninfo_unexecuted_blocks=1 00:16:14.567 00:16:14.567 ' 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.567 --rc genhtml_branch_coverage=1 00:16:14.567 --rc genhtml_function_coverage=1 00:16:14.567 --rc genhtml_legend=1 00:16:14.567 --rc geninfo_all_blocks=1 00:16:14.567 --rc geninfo_unexecuted_blocks=1 00:16:14.567 00:16:14.567 ' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:14.567 14:04:52 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85251 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85251 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85251 ']' 00:16:14.567 14:04:52 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.567 14:04:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:14.567 [2024-11-17 14:04:52.843386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:14.567 [2024-11-17 14:04:52.843689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85251 ] 00:16:14.826 [2024-11-17 14:04:52.983805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:14.826 [2024-11-17 14:04:53.016838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.826 [2024-11-17 14:04:53.017032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.826 [2024-11-17 14:04:53.017130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.394 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.394 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:15.394 14:04:53 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:15.652 14:04:53 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:15.652 14:04:53 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:15.652 14:04:53 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:15.652 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:15.652 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:15.652 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:15.652 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:15.652 14:04:53 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:15.911 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:15.911 { 00:16:15.911 "name": "nvme0n1", 00:16:15.911 "aliases": [ 
00:16:15.911 "f4f7393a-81ad-4b3e-a53a-c1536c99a51f" 00:16:15.911 ], 00:16:15.911 "product_name": "NVMe disk", 00:16:15.911 "block_size": 4096, 00:16:15.911 "num_blocks": 1310720, 00:16:15.911 "uuid": "f4f7393a-81ad-4b3e-a53a-c1536c99a51f", 00:16:15.911 "numa_id": -1, 00:16:15.911 "assigned_rate_limits": { 00:16:15.911 "rw_ios_per_sec": 0, 00:16:15.911 "rw_mbytes_per_sec": 0, 00:16:15.911 "r_mbytes_per_sec": 0, 00:16:15.911 "w_mbytes_per_sec": 0 00:16:15.911 }, 00:16:15.911 "claimed": true, 00:16:15.911 "claim_type": "read_many_write_one", 00:16:15.911 "zoned": false, 00:16:15.911 "supported_io_types": { 00:16:15.911 "read": true, 00:16:15.911 "write": true, 00:16:15.911 "unmap": true, 00:16:15.911 "flush": true, 00:16:15.911 "reset": true, 00:16:15.911 "nvme_admin": true, 00:16:15.911 "nvme_io": true, 00:16:15.911 "nvme_io_md": false, 00:16:15.911 "write_zeroes": true, 00:16:15.911 "zcopy": false, 00:16:15.911 "get_zone_info": false, 00:16:15.911 "zone_management": false, 00:16:15.911 "zone_append": false, 00:16:15.911 "compare": true, 00:16:15.911 "compare_and_write": false, 00:16:15.911 "abort": true, 00:16:15.911 "seek_hole": false, 00:16:15.911 "seek_data": false, 00:16:15.911 "copy": true, 00:16:15.911 "nvme_iov_md": false 00:16:15.911 }, 00:16:15.911 "driver_specific": { 00:16:15.911 "nvme": [ 00:16:15.911 { 00:16:15.911 "pci_address": "0000:00:11.0", 00:16:15.911 "trid": { 00:16:15.911 "trtype": "PCIe", 00:16:15.911 "traddr": "0000:00:11.0" 00:16:15.911 }, 00:16:15.911 "ctrlr_data": { 00:16:15.911 "cntlid": 0, 00:16:15.911 "vendor_id": "0x1b36", 00:16:15.911 "model_number": "QEMU NVMe Ctrl", 00:16:15.911 "serial_number": "12341", 00:16:15.911 "firmware_revision": "8.0.0", 00:16:15.911 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:15.911 "oacs": { 00:16:15.911 "security": 0, 00:16:15.911 "format": 1, 00:16:15.911 "firmware": 0, 00:16:15.911 "ns_manage": 1 00:16:15.911 }, 00:16:15.911 "multi_ctrlr": false, 00:16:15.911 "ana_reporting": false 00:16:15.911 }, 00:16:15.911 "vs": { 00:16:15.911 "nvme_version": "1.4" 00:16:15.911 }, 00:16:15.911 "ns_data": { 00:16:15.912 "id": 1, 00:16:15.912 "can_share": false 00:16:15.912 } 00:16:15.912 } 00:16:15.912 ], 00:16:15.912 "mp_policy": "active_passive" 00:16:15.912 } 00:16:15.912 } 00:16:15.912 ]' 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:15.912 14:04:54 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:15.912 14:04:54 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:15.912 14:04:54 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:15.912 14:04:54 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:15.912 14:04:54 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:15.912 14:04:54 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:16.170 14:04:54 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=676f6b94-c723-4f75-85ff-b42635aa8172 00:16:16.170 14:04:54 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:16.170 14:04:54 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 676f6b94-c723-4f75-85ff-b42635aa8172 00:16:16.429 14:04:54 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:16.689 14:04:54 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=0ba7943d-96dc-4460-aa5e-acdae528943a 00:16:16.689 14:04:54 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0ba7943d-96dc-4460-aa5e-acdae528943a 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:16.951 14:04:55 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3031c532-daae-461a-835e-afc41d906578 00:16:16.951 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:16.951 { 00:16:16.951 "name": "3031c532-daae-461a-835e-afc41d906578", 00:16:16.951 "aliases": [ 00:16:16.951 "lvs/nvme0n1p0" 00:16:16.951 ], 00:16:16.951 "product_name": "Logical Volume", 00:16:16.951 "block_size": 4096, 00:16:16.951 "num_blocks": 26476544, 00:16:16.951 "uuid": "3031c532-daae-461a-835e-afc41d906578", 00:16:16.951 "assigned_rate_limits": { 00:16:16.951 "rw_ios_per_sec": 0, 00:16:16.951 "rw_mbytes_per_sec": 0, 00:16:16.951 "r_mbytes_per_sec": 0, 00:16:16.951 "w_mbytes_per_sec": 0 00:16:16.951 }, 00:16:16.951 "claimed": false, 00:16:16.951 "zoned": false, 00:16:16.951 "supported_io_types": { 00:16:16.951 "read": true, 00:16:16.951 "write": true, 00:16:16.951 "unmap": true, 00:16:16.951 "flush": false, 00:16:16.951 "reset": true, 00:16:16.951 "nvme_admin": false, 00:16:16.951 "nvme_io": false, 00:16:16.951 "nvme_io_md": false, 00:16:16.951 "write_zeroes": true, 00:16:16.951 "zcopy": false, 00:16:16.951 "get_zone_info": false, 00:16:16.951 "zone_management": false, 00:16:16.951 "zone_append": false, 00:16:16.951 "compare": false, 00:16:16.951 "compare_and_write": false, 00:16:16.951 "abort": false, 00:16:16.951 "seek_hole": true, 00:16:16.951 "seek_data": true, 00:16:16.951 "copy": false, 00:16:16.951 "nvme_iov_md": false 00:16:16.951 }, 00:16:16.951 "driver_specific": { 00:16:16.951 "lvol": { 00:16:16.951 "lvol_store_uuid": "0ba7943d-96dc-4460-aa5e-acdae528943a", 00:16:16.951 "base_bdev": "nvme0n1", 00:16:16.951 "thin_provision": true, 00:16:16.951 "num_allocated_clusters": 0, 00:16:16.951 "snapshot": false, 00:16:16.951 "clone": false, 00:16:16.951 "esnap_clone": false 00:16:16.951 } 00:16:16.951 } 00:16:16.951 } 00:16:16.951 ]' 00:16:16.951 14:04:55 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.212 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:17.212 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.212 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:17.212 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:17.212 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:17.212 14:04:55 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:17.212 14:04:55 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:17.212 14:04:55 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:17.474 14:04:55 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:17.474 14:04:55 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:17.474 14:04:55 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 3031c532-daae-461a-835e-afc41d906578 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3031c532-daae-461a-835e-afc41d906578 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3031c532-daae-461a-835e-afc41d906578 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:17.474 { 00:16:17.474 "name": "3031c532-daae-461a-835e-afc41d906578", 00:16:17.474 "aliases": [ 00:16:17.474 "lvs/nvme0n1p0" 00:16:17.474 ], 00:16:17.474 "product_name": "Logical Volume", 00:16:17.474 "block_size": 4096, 00:16:17.474 "num_blocks": 26476544, 00:16:17.474 "uuid": "3031c532-daae-461a-835e-afc41d906578", 00:16:17.474 "assigned_rate_limits": { 00:16:17.474 "rw_ios_per_sec": 0, 00:16:17.474 "rw_mbytes_per_sec": 0, 00:16:17.474 "r_mbytes_per_sec": 0, 00:16:17.474 "w_mbytes_per_sec": 0 00:16:17.474 }, 00:16:17.474 "claimed": false, 00:16:17.474 "zoned": false, 00:16:17.474 "supported_io_types": { 00:16:17.474 "read": true, 00:16:17.474 "write": true, 00:16:17.474 "unmap": true, 00:16:17.474 "flush": false, 00:16:17.474 "reset": true, 00:16:17.474 "nvme_admin": false, 00:16:17.474 "nvme_io": false, 00:16:17.474 "nvme_io_md": false, 00:16:17.474 "write_zeroes": true, 00:16:17.474 "zcopy": false, 00:16:17.474 "get_zone_info": false, 00:16:17.474 "zone_management": false, 00:16:17.474 "zone_append": false, 00:16:17.474 "compare": false, 00:16:17.474 "compare_and_write": false, 00:16:17.474 "abort": false, 00:16:17.474 "seek_hole": true, 00:16:17.474 "seek_data": true, 00:16:17.474 "copy": false, 00:16:17.474 "nvme_iov_md": false 00:16:17.474 }, 00:16:17.474 "driver_specific": { 00:16:17.474 "lvol": { 00:16:17.474 "lvol_store_uuid": "0ba7943d-96dc-4460-aa5e-acdae528943a", 00:16:17.474 "base_bdev": "nvme0n1", 00:16:17.474 "thin_provision": true, 00:16:17.474 "num_allocated_clusters": 0, 00:16:17.474 "snapshot": false, 00:16:17.474 "clone": false, 00:16:17.474 "esnap_clone": false 00:16:17.474 } 00:16:17.474 } 00:16:17.474 } 00:16:17.474 ]' 00:16:17.474 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.735 14:04:55 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:17.735 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.735 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:17.735 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:17.735 14:04:55 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:17.735 14:04:55 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:17.735 14:04:55 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:17.735 14:04:56 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:17.735 14:04:56 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:17.735 14:04:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 3031c532-daae-461a-835e-afc41d906578 00:16:17.735 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=3031c532-daae-461a-835e-afc41d906578 00:16:17.735 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:17.735 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:17.735 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:17.735 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3031c532-daae-461a-835e-afc41d906578 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:17.997 { 00:16:17.997 "name": "3031c532-daae-461a-835e-afc41d906578", 00:16:17.997 "aliases": [ 00:16:17.997 "lvs/nvme0n1p0" 00:16:17.997 ], 00:16:17.997 "product_name": "Logical Volume", 00:16:17.997 "block_size": 4096, 00:16:17.997 "num_blocks": 26476544, 00:16:17.997 "uuid": "3031c532-daae-461a-835e-afc41d906578", 00:16:17.997 "assigned_rate_limits": { 00:16:17.997 "rw_ios_per_sec": 0, 00:16:17.997 "rw_mbytes_per_sec": 0, 00:16:17.997 "r_mbytes_per_sec": 0, 00:16:17.997 "w_mbytes_per_sec": 0 00:16:17.997 }, 00:16:17.997 "claimed": false, 00:16:17.997 "zoned": false, 00:16:17.997 "supported_io_types": { 00:16:17.997 "read": true, 00:16:17.997 "write": true, 00:16:17.997 "unmap": true, 00:16:17.997 "flush": false, 00:16:17.997 "reset": true, 00:16:17.997 "nvme_admin": false, 00:16:17.997 "nvme_io": false, 00:16:17.997 "nvme_io_md": false, 00:16:17.997 "write_zeroes": true, 00:16:17.997 "zcopy": false, 00:16:17.997 "get_zone_info": false, 00:16:17.997 "zone_management": false, 00:16:17.997 "zone_append": false, 00:16:17.997 "compare": false, 00:16:17.997 "compare_and_write": false, 00:16:17.997 "abort": false, 00:16:17.997 "seek_hole": true, 00:16:17.997 "seek_data": true, 00:16:17.997 "copy": false, 00:16:17.997 "nvme_iov_md": false 00:16:17.997 }, 00:16:17.997 "driver_specific": { 00:16:17.997 "lvol": { 00:16:17.997 "lvol_store_uuid": "0ba7943d-96dc-4460-aa5e-acdae528943a", 00:16:17.997 "base_bdev": "nvme0n1", 00:16:17.997 "thin_provision": true, 00:16:17.997 "num_allocated_clusters": 0, 00:16:17.997 "snapshot": false, 00:16:17.997 "clone": false, 00:16:17.997 "esnap_clone": false 00:16:17.997 } 00:16:17.997 } 00:16:17.997 } 00:16:17.997 ]' 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:17.997 14:04:56 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:17.997 14:04:56 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:17.997 14:04:56 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3031c532-daae-461a-835e-afc41d906578 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:18.259 [2024-11-17 14:04:56.424024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.259 [2024-11-17 14:04:56.424488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:18.259 [2024-11-17 14:04:56.424512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:18.259 [2024-11-17 14:04:56.424532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.259 [2024-11-17 14:04:56.426632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.259 [2024-11-17 14:04:56.426746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:18.259 [2024-11-17 14:04:56.426830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.073 ms 00:16:18.259 [2024-11-17 14:04:56.426916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.259 [2024-11-17 14:04:56.427122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:18.259 [2024-11-17 14:04:56.427411] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:18.259 [2024-11-17 14:04:56.427549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.259 [2024-11-17 14:04:56.427593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:18.259 [2024-11-17 14:04:56.427648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:16:18.259 [2024-11-17 14:04:56.427696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.259 [2024-11-17 14:04:56.427931] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:16:18.259 [2024-11-17 14:04:56.429028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.259 [2024-11-17 14:04:56.429144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:18.259 [2024-11-17 14:04:56.429231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:18.259 [2024-11-17 14:04:56.429323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.259 [2024-11-17 14:04:56.434488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.259 [2024-11-17 14:04:56.434606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:18.259 [2024-11-17 14:04:56.434693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.055 ms 00:16:18.259 [2024-11-17 14:04:56.434739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.259 [2024-11-17 14:04:56.434947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 [2024-11-17 14:04:56.435036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:18.260 [2024-11-17 14:04:56.435116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.050 ms 00:16:18.260 [2024-11-17 14:04:56.435174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.435292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 [2024-11-17 14:04:56.435342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:18.260 [2024-11-17 14:04:56.435461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:18.260 [2024-11-17 14:04:56.435523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.435584] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:18.260 [2024-11-17 14:04:56.436971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 [2024-11-17 14:04:56.437091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:18.260 [2024-11-17 14:04:56.437172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:16:18.260 [2024-11-17 14:04:56.437298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.437368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 [2024-11-17 14:04:56.437468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:18.260 [2024-11-17 14:04:56.437516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:18.260 [2024-11-17 14:04:56.437587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.437685] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:18.260 [2024-11-17 14:04:56.437867] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:18.260 [2024-11-17 14:04:56.437958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:18.260 [2024-11-17 14:04:56.438043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:18.260 [2024-11-17 14:04:56.438144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:18.260 [2024-11-17 14:04:56.438257] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:18.260 [2024-11-17 14:04:56.438310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:18.260 [2024-11-17 14:04:56.438357] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:18.260 [2024-11-17 14:04:56.438461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:18.260 [2024-11-17 14:04:56.438506] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:18.260 [2024-11-17 14:04:56.438575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 [2024-11-17 14:04:56.438642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:18.260 [2024-11-17 14:04:56.438719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:16:18.260 [2024-11-17 14:04:56.438760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.438892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.260 
[2024-11-17 14:04:56.438933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:18.260 [2024-11-17 14:04:56.438966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:18.260 [2024-11-17 14:04:56.438995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.260 [2024-11-17 14:04:56.439169] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:18.260 [2024-11-17 14:04:56.439218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:18.260 [2024-11-17 14:04:56.439262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.260 [2024-11-17 14:04:56.439301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.439391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:18.260 [2024-11-17 14:04:56.439429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.439459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:18.260 [2024-11-17 14:04:56.439557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:18.260 [2024-11-17 14:04:56.439594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:18.260 [2024-11-17 14:04:56.439627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.260 [2024-11-17 14:04:56.439658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:18.260 [2024-11-17 14:04:56.439692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:18.260 [2024-11-17 14:04:56.439723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:18.260 [2024-11-17 14:04:56.439787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:18.260 [2024-11-17 14:04:56.439823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:18.260 [2024-11-17 14:04:56.439853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.439876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:18.260 [2024-11-17 14:04:56.439909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:18.260 [2024-11-17 14:04:56.439980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:18.260 [2024-11-17 14:04:56.440046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.260 [2024-11-17 14:04:56.440107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:18.260 [2024-11-17 14:04:56.440173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.260 [2024-11-17 14:04:56.440230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:18.260 [2024-11-17 14:04:56.440273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.260 [2024-11-17 14:04:56.440384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:18.260 [2024-11-17 14:04:56.440417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:18.260 [2024-11-17 14:04:56.440475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:18.260 [2024-11-17 14:04:56.440539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.260 [2024-11-17 14:04:56.440612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:18.260 [2024-11-17 14:04:56.440644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:18.260 [2024-11-17 14:04:56.440709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:18.260 [2024-11-17 14:04:56.440747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:18.260 [2024-11-17 14:04:56.440775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:18.260 [2024-11-17 14:04:56.440805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:18.260 [2024-11-17 14:04:56.440906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:18.260 [2024-11-17 14:04:56.440936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.440963] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:18.260 [2024-11-17 14:04:56.441027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:18.260 [2024-11-17 14:04:56.441070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:18.260 [2024-11-17 14:04:56.441095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:18.260 [2024-11-17 14:04:56.441123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:18.260 [2024-11-17 14:04:56.441181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:18.260 [2024-11-17 14:04:56.441222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:18.260 [2024-11-17 14:04:56.441263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:18.260 [2024-11-17 14:04:56.441297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:18.260 [2024-11-17 14:04:56.441361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:18.260 [2024-11-17 14:04:56.441400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:18.260 [2024-11-17 14:04:56.441430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.260 [2024-11-17 14:04:56.441504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:18.260 [2024-11-17 14:04:56.441543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:18.260 [2024-11-17 14:04:56.441576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:18.260 [2024-11-17 14:04:56.441606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:18.260 [2024-11-17 14:04:56.441673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:18.260 [2024-11-17 14:04:56.441710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:18.260 [2024-11-17 14:04:56.441744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:18.260 [2024-11-17 14:04:56.441775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:18.260 [2024-11-17 14:04:56.441847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:18.260 [2024-11-17 14:04:56.441883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:18.260 [2024-11-17 14:04:56.441913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:18.261 [2024-11-17 14:04:56.441946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:18.261 [2024-11-17 14:04:56.441976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:18.261 [2024-11-17 14:04:56.442053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:18.261 [2024-11-17 14:04:56.442093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:18.261 [2024-11-17 14:04:56.442127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:18.261 [2024-11-17 14:04:56.442159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:18.261 [2024-11-17 14:04:56.442191] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:18.261 [2024-11-17 14:04:56.442273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:18.261 [2024-11-17 14:04:56.442315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:18.261 [2024-11-17 14:04:56.442350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:18.261 [2024-11-17 14:04:56.442378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:18.261 [2024-11-17 14:04:56.442411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:16:18.261 [2024-11-17 14:04:56.442478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:18.261 [2024-11-17 14:04:56.442589] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:18.261 [2024-11-17 14:04:56.442662] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:20.825 [2024-11-17 14:04:58.575091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.575443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:20.825 [2024-11-17 14:04:58.575642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2132.488 ms 00:16:20.825 [2024-11-17 14:04:58.575720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.593704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.593948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:20.825 [2024-11-17 14:04:58.594032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.837 ms 00:16:20.825 [2024-11-17 14:04:58.594090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.594285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.594412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:20.825 [2024-11-17 14:04:58.594493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:16:20.825 [2024-11-17 14:04:58.594598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.605721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.605972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:20.825 [2024-11-17 14:04:58.606155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:16:20.825 [2024-11-17 14:04:58.606380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.606633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.606818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:20.825 [2024-11-17 14:04:58.607048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:20.825 [2024-11-17 14:04:58.607220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.607733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.607867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:20.825 [2024-11-17 14:04:58.607969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:16:20.825 [2024-11-17 14:04:58.608057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.608251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.608313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:20.825 [2024-11-17 14:04:58.608576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:20.825 [2024-11-17 14:04:58.608633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.613931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.614083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:20.825 [2024-11-17 14:04:58.614188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:16:20.825 [2024-11-17 14:04:58.614214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.622535] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:20.825 [2024-11-17 14:04:58.636571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.636721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:20.825 [2024-11-17 14:04:58.636890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.165 ms 00:16:20.825 [2024-11-17 14:04:58.636952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.825 [2024-11-17 14:04:58.684166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.825 [2024-11-17 14:04:58.684487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:20.825 [2024-11-17 14:04:58.684592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.122 ms 00:16:20.826 [2024-11-17 14:04:58.684679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.684984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.685151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:20.826 [2024-11-17 14:04:58.685299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:16:20.826 [2024-11-17 14:04:58.685436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.689185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.689422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:20.826 [2024-11-17 14:04:58.689499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.564 ms 00:16:20.826 [2024-11-17 14:04:58.689564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.692750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.692908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:20.826 [2024-11-17 14:04:58.693003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:16:20.826 [2024-11-17 14:04:58.693113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.693509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.693630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:20.826 [2024-11-17 14:04:58.693732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:16:20.826 [2024-11-17 14:04:58.693820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.716651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.716819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:20.826 [2024-11-17 14:04:58.716922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.716 ms 00:16:20.826 [2024-11-17 14:04:58.716984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:20.826 [2024-11-17 14:04:58.720611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.720772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:20.826 [2024-11-17 14:04:58.720874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.513 ms 00:16:20.826 [2024-11-17 14:04:58.720951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.723714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.723867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:20.826 [2024-11-17 14:04:58.723967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:16:20.826 [2024-11-17 14:04:58.724025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.727278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.727432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:20.826 [2024-11-17 14:04:58.727582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:16:20.826 [2024-11-17 14:04:58.727646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.727773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.727883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:20.826 [2024-11-17 14:04:58.727938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:20.826 [2024-11-17 14:04:58.728031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.728191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:20.826 [2024-11-17 14:04:58.728313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:20.826 [2024-11-17 14:04:58.728398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:20.826 [2024-11-17 14:04:58.728462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:20.826 [2024-11-17 14:04:58.729337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:20.826 [2024-11-17 14:04:58.730474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2305.014 ms, result 0 00:16:20.826 [2024-11-17 14:04:58.731061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:20.826 { 00:16:20.826 "name": "ftl0", 00:16:20.826 "uuid": "ddfd3108-0b08-42c1-8a1f-32ad0e417f42" 00:16:20.826 } 00:16:20.826 14:04:58 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:20.826 14:04:58 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:20.826 14:04:58 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:21.087 [ 00:16:21.087 { 00:16:21.087 "name": "ftl0", 00:16:21.087 "aliases": [ 00:16:21.087 "ddfd3108-0b08-42c1-8a1f-32ad0e417f42" 00:16:21.087 ], 00:16:21.087 "product_name": "FTL disk", 00:16:21.087 "block_size": 4096, 00:16:21.087 "num_blocks": 23592960, 00:16:21.087 "uuid": "ddfd3108-0b08-42c1-8a1f-32ad0e417f42", 00:16:21.087 "assigned_rate_limits": { 00:16:21.087 "rw_ios_per_sec": 0, 00:16:21.087 "rw_mbytes_per_sec": 0, 00:16:21.087 "r_mbytes_per_sec": 0, 00:16:21.087 "w_mbytes_per_sec": 0 00:16:21.087 }, 00:16:21.087 "claimed": false, 00:16:21.087 "zoned": false, 00:16:21.087 "supported_io_types": { 00:16:21.087 "read": true, 00:16:21.087 "write": true, 00:16:21.087 "unmap": true, 00:16:21.087 "flush": true, 00:16:21.087 "reset": false, 00:16:21.087 "nvme_admin": false, 00:16:21.087 "nvme_io": false, 00:16:21.087 "nvme_io_md": false, 00:16:21.087 "write_zeroes": true, 00:16:21.087 "zcopy": false, 00:16:21.087 "get_zone_info": false, 00:16:21.087 "zone_management": false, 00:16:21.087 "zone_append": false, 00:16:21.087 "compare": false, 00:16:21.087 "compare_and_write": false, 00:16:21.087 "abort": false, 00:16:21.087 "seek_hole": false, 00:16:21.087 "seek_data": false, 00:16:21.087 "copy": false, 00:16:21.087 "nvme_iov_md": false 00:16:21.087 }, 00:16:21.087 "driver_specific": { 00:16:21.087 "ftl": { 00:16:21.087 "base_bdev": "3031c532-daae-461a-835e-afc41d906578", 00:16:21.087 "cache": "nvc0n1p0" 00:16:21.087 } 00:16:21.087 } 00:16:21.087 } 00:16:21.087 ] 00:16:21.087 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:21.087 14:04:59 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:21.087 14:04:59 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:21.087 14:04:59 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:21.087 14:04:59 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:21.346 14:04:59 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:21.346 { 00:16:21.346 "name": "ftl0", 00:16:21.346 "aliases": [ 00:16:21.346 "ddfd3108-0b08-42c1-8a1f-32ad0e417f42" 00:16:21.346 ], 00:16:21.346 "product_name": "FTL disk", 00:16:21.346 "block_size": 4096, 00:16:21.346 "num_blocks": 23592960, 00:16:21.346 "uuid": "ddfd3108-0b08-42c1-8a1f-32ad0e417f42", 00:16:21.346 "assigned_rate_limits": { 00:16:21.346 "rw_ios_per_sec": 0, 00:16:21.346 "rw_mbytes_per_sec": 0, 00:16:21.346 "r_mbytes_per_sec": 0, 00:16:21.346 "w_mbytes_per_sec": 0 00:16:21.346 }, 00:16:21.346 "claimed": false, 00:16:21.346 "zoned": false, 00:16:21.346 "supported_io_types": { 00:16:21.346 "read": true, 00:16:21.346 "write": true, 00:16:21.346 "unmap": true, 00:16:21.346 "flush": true, 00:16:21.346 "reset": false, 00:16:21.346 "nvme_admin": false, 00:16:21.346 "nvme_io": false, 00:16:21.346 "nvme_io_md": false, 00:16:21.346 "write_zeroes": true, 00:16:21.346 "zcopy": false, 00:16:21.346 "get_zone_info": false, 00:16:21.346 "zone_management": false, 00:16:21.346 "zone_append": false, 00:16:21.346 "compare": false, 00:16:21.346 "compare_and_write": false, 00:16:21.346 "abort": false, 00:16:21.346 "seek_hole": false, 00:16:21.346 "seek_data": false, 00:16:21.346 "copy": false, 00:16:21.346 "nvme_iov_md": false 00:16:21.346 }, 00:16:21.346 "driver_specific": { 00:16:21.346 "ftl": { 00:16:21.346 "base_bdev": "3031c532-daae-461a-835e-afc41d906578", 
00:16:21.346 "cache": "nvc0n1p0" 00:16:21.346 } 00:16:21.346 } 00:16:21.346 } 00:16:21.346 ]' 00:16:21.346 14:04:59 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:21.346 14:04:59 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:21.346 14:04:59 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:21.608 [2024-11-17 14:04:59.758470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.608 [2024-11-17 14:04:59.758858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:21.608 [2024-11-17 14:04:59.758952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:21.608 [2024-11-17 14:04:59.759002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.608 [2024-11-17 14:04:59.759080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:21.608 [2024-11-17 14:04:59.759620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.608 [2024-11-17 14:04:59.759650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:21.608 [2024-11-17 14:04:59.759660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:16:21.608 [2024-11-17 14:04:59.759669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.608 [2024-11-17 14:04:59.760142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.608 [2024-11-17 14:04:59.760156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:21.608 [2024-11-17 14:04:59.760178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:16:21.608 [2024-11-17 14:04:59.760191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.608 [2024-11-17 14:04:59.764052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.608 [2024-11-17 14:04:59.764199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:21.609 [2024-11-17 14:04:59.764257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.833 ms 00:16:21.609 [2024-11-17 14:04:59.764303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.771298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.771451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:21.609 [2024-11-17 14:04:59.771529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:16:21.609 [2024-11-17 14:04:59.771572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.772986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.773141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:21.609 [2024-11-17 14:04:59.773195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:16:21.609 [2024-11-17 14:04:59.773235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.777077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.777227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:21.609 [2024-11-17 14:04:59.777366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.754 ms 00:16:21.609 [2024-11-17 14:04:59.777431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.777682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.777790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:21.609 [2024-11-17 14:04:59.777887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:16:21.609 [2024-11-17 14:04:59.777968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.779335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.779475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:21.609 [2024-11-17 14:04:59.779573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:16:21.609 [2024-11-17 14:04:59.779668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.780886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.781019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:21.609 [2024-11-17 14:04:59.781122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:16:21.609 [2024-11-17 14:04:59.781214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.782258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.782396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:21.609 [2024-11-17 14:04:59.782494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.886 ms 00:16:21.609 [2024-11-17 14:04:59.782559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.783548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.609 [2024-11-17 14:04:59.783680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:21.609 [2024-11-17 14:04:59.783776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:16:21.609 [2024-11-17 14:04:59.783866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.609 [2024-11-17 14:04:59.783964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:21.609 [2024-11-17 14:04:59.784082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:21.609 [2024-11-17 14:04:59.784760] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[Bands 9 through 100 report the same values: 0 / 261120 wr_cnt: 0 state: free]
00:16:21.610 [2024-11-17 14:04:59.786617] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:21.610 [2024-11-17 14:04:59.786624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:16:21.610 [2024-11-17 14:04:59.786634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:21.610 [2024-11-17 14:04:59.786650] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:21.610 [2024-11-17 14:04:59.786657] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:21.610 [2024-11-17 14:04:59.786665] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:21.610 [2024-11-17 14:04:59.786674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:21.610 [2024-11-17 14:04:59.786684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:21.610 
[2024-11-17 14:04:59.786692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:21.610 [2024-11-17 14:04:59.786698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:21.610 [2024-11-17 14:04:59.786706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:21.610 [2024-11-17 14:04:59.786713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.610 [2024-11-17 14:04:59.786722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:21.610 [2024-11-17 14:04:59.786731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:16:21.610 [2024-11-17 14:04:59.786741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.788154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.610 [2024-11-17 14:04:59.788173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:21.610 [2024-11-17 14:04:59.788181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:16:21.610 [2024-11-17 14:04:59.788192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.790465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:21.610 [2024-11-17 14:04:59.790513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:21.610 [2024-11-17 14:04:59.790551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:16:21.610 [2024-11-17 14:04:59.790590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.795677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.795849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:21.610 [2024-11-17 14:04:59.795900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.795946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.796062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.796171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:21.610 [2024-11-17 14:04:59.796219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.796343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.796444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.796543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:21.610 [2024-11-17 14:04:59.796584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.796616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.796667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.796759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:21.610 [2024-11-17 14:04:59.796806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.796839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.805618] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.805799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:21.610 [2024-11-17 14:04:59.805944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.806041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.813389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.813562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:21.610 [2024-11-17 14:04:59.813648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.813715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.813847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.813911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:21.610 [2024-11-17 14:04:59.814058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.814110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.814257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.814360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:21.610 [2024-11-17 14:04:59.814437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.814485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.610 [2024-11-17 14:04:59.814723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.610 [2024-11-17 14:04:59.814842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:21.610 [2024-11-17 14:04:59.814955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.610 [2024-11-17 14:04:59.815013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.611 [2024-11-17 14:04:59.815139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.611 [2024-11-17 14:04:59.815249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:21.611 [2024-11-17 14:04:59.815345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.611 [2024-11-17 14:04:59.815398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.611 [2024-11-17 14:04:59.815590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.611 [2024-11-17 14:04:59.815651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:21.611 [2024-11-17 14:04:59.815786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.611 [2024-11-17 14:04:59.815836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.611 [2024-11-17 14:04:59.816020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:21.611 [2024-11-17 14:04:59.816124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:21.611 [2024-11-17 14:04:59.816171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:21.611 [2024-11-17 14:04:59.816208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:21.611 
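Each FTL management step above logs a quadruple of trace_step records from mngt/ftl_mngt.c: the step kind (Action or Rollback) at line 427, its name at 428, its duration at 430, and its status at 431. When the raw console output is saved to a file, those records can be tallied and cross-checked against the total that finish_msg prints in the summary record just below (duration = 57.947 ms for 'FTL shutdown'). A minimal sketch, assuming one log record per line as in the live console and a hypothetical file name build.log:

    awk '
      /428:trace_step/ { sub(/.*name: /, ""); name = $0 }   # remember the step name
      /430:trace_step/ {                                    # pair it with the duration
        if (match($0, /duration: [0-9.]+ ms/)) {
          d = substr($0, RSTART + 10, RLENGTH - 13)         # strip "duration: " and " ms"
          printf "%10.3f ms  %s\n", d, name
          total += d
        }
      }
      END { printf "%10.3f ms  (sum of all steps)\n", total }' build.log

The sum need not match the reported total exactly; finish_msg times the whole management process, not just the individual steps.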
[2024-11-17 14:04:59.816442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.947 ms, result 0 00:16:21.611 true 00:16:21.611 14:04:59 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85251 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85251 ']' 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85251 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85251 00:16:21.611 killing process with pid 85251 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85251' 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85251 00:16:21.611 14:04:59 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85251 00:16:26.897 14:05:04 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:27.841 65536+0 records in 00:16:27.841 65536+0 records out 00:16:27.841 268435456 bytes (268 MB, 256 MiB) copied, 0.801893 s, 335 MB/s 00:16:27.841 14:05:05 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:27.841 [2024-11-17 14:05:05.846532] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
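For reference, the dd step in the trace above writes 65536 blocks of 4 KiB, i.e. 65536 * 4096 = 268435456 bytes (256 MiB), and dd reports its rate in decimal megabytes: 268435456 B / 0.801893 s is roughly 334.8e6 B/s, which rounds to the 335 MB/s shown. The same arithmetic as a quick shell check (the constants are copied from the record above; nothing here is part of trim.sh):

    awk 'BEGIN {
      bytes = 65536 * 4096                      # 268435456 B = 256 MiB
      printf "%.0f MB/s\n", bytes / 0.801893 / 1e6
    }'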
00:16:27.841 [2024-11-17 14:05:05.846652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85412 ] 00:16:27.841 [2024-11-17 14:05:05.995650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.841 [2024-11-17 14:05:06.046661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.106 [2024-11-17 14:05:06.161766] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:28.106 [2024-11-17 14:05:06.162105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:28.106 [2024-11-17 14:05:06.322495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.322706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:28.106 [2024-11-17 14:05:06.322732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:28.106 [2024-11-17 14:05:06.322749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.325325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.325371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:28.106 [2024-11-17 14:05:06.325384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:16:28.106 [2024-11-17 14:05:06.325392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.325500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:28.106 [2024-11-17 14:05:06.325760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:28.106 [2024-11-17 14:05:06.325778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.325786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:28.106 [2024-11-17 14:05:06.325800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:16:28.106 [2024-11-17 14:05:06.325811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.327505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:28.106 [2024-11-17 14:05:06.331206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.331266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:28.106 [2024-11-17 14:05:06.331277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms 00:16:28.106 [2024-11-17 14:05:06.331288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.331366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.331377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:28.106 [2024-11-17 14:05:06.331387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:28.106 [2024-11-17 14:05:06.331403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.339575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:28.106 [2024-11-17 14:05:06.339627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:28.106 [2024-11-17 14:05:06.339638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.126 ms 00:16:28.106 [2024-11-17 14:05:06.339646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.339786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.339798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:28.106 [2024-11-17 14:05:06.339807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:16:28.106 [2024-11-17 14:05:06.339815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.339843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.339851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:28.106 [2024-11-17 14:05:06.339859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:28.106 [2024-11-17 14:05:06.339866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.339897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:28.106 [2024-11-17 14:05:06.342048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.342224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:28.106 [2024-11-17 14:05:06.342258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.156 ms 00:16:28.106 [2024-11-17 14:05:06.342266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.342318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.342333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:28.106 [2024-11-17 14:05:06.342348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:16:28.106 [2024-11-17 14:05:06.342356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.342374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:28.106 [2024-11-17 14:05:06.342396] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:28.106 [2024-11-17 14:05:06.342434] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:28.106 [2024-11-17 14:05:06.342453] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:28.106 [2024-11-17 14:05:06.342566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:28.106 [2024-11-17 14:05:06.342578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:28.106 [2024-11-17 14:05:06.342589] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:28.106 [2024-11-17 14:05:06.342603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:28.106 [2024-11-17 14:05:06.342616] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:28.106 [2024-11-17 14:05:06.342629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:28.106 [2024-11-17 14:05:06.342636] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:28.106 [2024-11-17 14:05:06.342648] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:28.106 [2024-11-17 14:05:06.342655] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:28.106 [2024-11-17 14:05:06.342667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.342681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:28.106 [2024-11-17 14:05:06.342690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:16:28.106 [2024-11-17 14:05:06.342698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.342789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.106 [2024-11-17 14:05:06.342799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:28.106 [2024-11-17 14:05:06.342809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:28.106 [2024-11-17 14:05:06.342817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.106 [2024-11-17 14:05:06.342924] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:28.106 [2024-11-17 14:05:06.342941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:28.106 [2024-11-17 14:05:06.342954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:28.106 [2024-11-17 14:05:06.342964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.106 [2024-11-17 14:05:06.342973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:28.106 [2024-11-17 14:05:06.342980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:28.106 [2024-11-17 14:05:06.342988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:28.106 [2024-11-17 14:05:06.342996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:28.106 [2024-11-17 14:05:06.343009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:28.106 [2024-11-17 14:05:06.343017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:28.106 [2024-11-17 14:05:06.343025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:28.107 [2024-11-17 14:05:06.343034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:28.107 [2024-11-17 14:05:06.343042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:28.107 [2024-11-17 14:05:06.343049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:28.107 [2024-11-17 14:05:06.343057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:28.107 [2024-11-17 14:05:06.343066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:28.107 [2024-11-17 14:05:06.343081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343089] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:28.107 [2024-11-17 14:05:06.343106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:28.107 [2024-11-17 14:05:06.343131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:28.107 [2024-11-17 14:05:06.343158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:28.107 [2024-11-17 14:05:06.343179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:28.107 [2024-11-17 14:05:06.343199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:28.107 [2024-11-17 14:05:06.343213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:28.107 [2024-11-17 14:05:06.343220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:28.107 [2024-11-17 14:05:06.343226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:28.107 [2024-11-17 14:05:06.343233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:28.107 [2024-11-17 14:05:06.343264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:28.107 [2024-11-17 14:05:06.343271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:28.107 [2024-11-17 14:05:06.343288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:28.107 [2024-11-17 14:05:06.343295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:28.107 [2024-11-17 14:05:06.343312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:28.107 [2024-11-17 14:05:06.343320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.107 [2024-11-17 14:05:06.343336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:28.107 [2024-11-17 14:05:06.343343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:28.107 [2024-11-17 14:05:06.343350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:28.107 
[2024-11-17 14:05:06.343357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:28.107 [2024-11-17 14:05:06.343363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:28.107 [2024-11-17 14:05:06.343370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:28.107 [2024-11-17 14:05:06.343379] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:28.107 [2024-11-17 14:05:06.343390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:28.107 [2024-11-17 14:05:06.343408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:28.107 [2024-11-17 14:05:06.343416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:28.107 [2024-11-17 14:05:06.343424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:28.107 [2024-11-17 14:05:06.343432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:28.107 [2024-11-17 14:05:06.343439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:28.107 [2024-11-17 14:05:06.343447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:28.107 [2024-11-17 14:05:06.343454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:28.107 [2024-11-17 14:05:06.343477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:28.107 [2024-11-17 14:05:06.343485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:28.107 [2024-11-17 14:05:06.343521] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:28.107 [2024-11-17 14:05:06.343534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:28.107 [2024-11-17 14:05:06.343555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:28.107 [2024-11-17 14:05:06.343571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:28.107 [2024-11-17 14:05:06.343579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:28.107 [2024-11-17 14:05:06.343587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.343600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:28.107 [2024-11-17 14:05:06.343614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:16:28.107 [2024-11-17 14:05:06.343622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.367191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.367274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:28.107 [2024-11-17 14:05:06.367291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.493 ms 00:16:28.107 [2024-11-17 14:05:06.367301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.367483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.367497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:28.107 [2024-11-17 14:05:06.367514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:28.107 [2024-11-17 14:05:06.367523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.379182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.379233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:28.107 [2024-11-17 14:05:06.379266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.631 ms 00:16:28.107 [2024-11-17 14:05:06.379275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.379347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.379361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:28.107 [2024-11-17 14:05:06.379370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:28.107 [2024-11-17 14:05:06.379378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.379931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.379968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:28.107 [2024-11-17 14:05:06.379988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:16:28.107 [2024-11-17 14:05:06.380001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.380160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.380171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:28.107 [2024-11-17 14:05:06.380185] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:16:28.107 [2024-11-17 14:05:06.380194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.387525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.107 [2024-11-17 14:05:06.387577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:28.107 [2024-11-17 14:05:06.387587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.307 ms 00:16:28.107 [2024-11-17 14:05:06.387595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.107 [2024-11-17 14:05:06.391323] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:28.107 [2024-11-17 14:05:06.391373] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:28.107 [2024-11-17 14:05:06.391386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.108 [2024-11-17 14:05:06.391394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:28.108 [2024-11-17 14:05:06.391404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:16:28.108 [2024-11-17 14:05:06.391411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.369 [2024-11-17 14:05:06.406966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.369 [2024-11-17 14:05:06.407014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:28.369 [2024-11-17 14:05:06.407026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.479 ms 00:16:28.369 [2024-11-17 14:05:06.407034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.369 [2024-11-17 14:05:06.410029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.369 [2024-11-17 14:05:06.410203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:28.369 [2024-11-17 14:05:06.410222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:16:28.369 [2024-11-17 14:05:06.410230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.369 [2024-11-17 14:05:06.412723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.369 [2024-11-17 14:05:06.412769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:28.369 [2024-11-17 14:05:06.412788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.433 ms 00:16:28.369 [2024-11-17 14:05:06.412795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.369 [2024-11-17 14:05:06.413135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.369 [2024-11-17 14:05:06.413147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:28.369 [2024-11-17 14:05:06.413156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:16:28.369 [2024-11-17 14:05:06.413167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.369 [2024-11-17 14:05:06.436141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.369 [2024-11-17 14:05:06.436400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:28.369 [2024-11-17 14:05:06.436476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.951 ms 00:16:28.369 [2024-11-17 14:05:06.436502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.444965] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:28.370 [2024-11-17 14:05:06.464590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.464772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:28.370 [2024-11-17 14:05:06.464792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.976 ms 00:16:28.370 [2024-11-17 14:05:06.464801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.464899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.464911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:28.370 [2024-11-17 14:05:06.464921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:28.370 [2024-11-17 14:05:06.464929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.464995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.465005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:28.370 [2024-11-17 14:05:06.465014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:28.370 [2024-11-17 14:05:06.465023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.465046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.465055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:28.370 [2024-11-17 14:05:06.465064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:28.370 [2024-11-17 14:05:06.465072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.465109] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:28.370 [2024-11-17 14:05:06.465121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.465135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:28.370 [2024-11-17 14:05:06.465147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:28.370 [2024-11-17 14:05:06.465155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.471033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.471084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:28.370 [2024-11-17 14:05:06.471098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.855 ms 00:16:28.370 [2024-11-17 14:05:06.471107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 [2024-11-17 14:05:06.471207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.370 [2024-11-17 14:05:06.471218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:28.370 [2024-11-17 14:05:06.471231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:16:28.370 [2024-11-17 14:05:06.471269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.370 
[2024-11-17 14:05:06.472337] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:28.370 [2024-11-17 14:05:06.473679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.484 ms, result 0 00:16:28.370 [2024-11-17 14:05:06.475061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:28.370 [2024-11-17 14:05:06.482365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.314  [2024-11-17T14:05:08.559Z] Copying: 19/256 [MB] (19 MBps) [2024-11-17T14:05:09.499Z] Copying: 44/256 [MB] (24 MBps) [2024-11-17T14:05:10.886Z] Copying: 95/256 [MB] (50 MBps) [2024-11-17T14:05:11.831Z] Copying: 131/256 [MB] (36 MBps) [2024-11-17T14:05:12.775Z] Copying: 152/256 [MB] (20 MBps) [2024-11-17T14:05:13.718Z] Copying: 169/256 [MB] (16 MBps) [2024-11-17T14:05:14.662Z] Copying: 183/256 [MB] (14 MBps) [2024-11-17T14:05:15.605Z] Copying: 199/256 [MB] (15 MBps) [2024-11-17T14:05:16.542Z] Copying: 211/256 [MB] (12 MBps) [2024-11-17T14:05:17.916Z] Copying: 222/256 [MB] (10 MBps) [2024-11-17T14:05:18.532Z] Copying: 238/256 [MB] (16 MBps) [2024-11-17T14:05:18.532Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-17 14:05:18.443427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:40.231 [2024-11-17 14:05:18.444497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 14:05:18.444594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:40.231 [2024-11-17 14:05:18.444648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:40.231 [2024-11-17 14:05:18.444673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.231 [2024-11-17 14:05:18.444706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:40.231 [2024-11-17 14:05:18.445191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 14:05:18.445296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:40.231 [2024-11-17 14:05:18.445345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:16:40.231 [2024-11-17 14:05:18.445363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.231 [2024-11-17 14:05:18.446677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 14:05:18.446766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:40.231 [2024-11-17 14:05:18.446814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms 00:16:40.231 [2024-11-17 14:05:18.446832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.231 [2024-11-17 14:05:18.451687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 14:05:18.451780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:40.231 [2024-11-17 14:05:18.451830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:16:40.231 [2024-11-17 14:05:18.451847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.231 [2024-11-17 14:05:18.457286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 
14:05:18.457366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:40.231 [2024-11-17 14:05:18.457405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.406 ms 00:16:40.231 [2024-11-17 14:05:18.457422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.231 [2024-11-17 14:05:18.458508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.231 [2024-11-17 14:05:18.458590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:40.231 [2024-11-17 14:05:18.458600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:16:40.231 [2024-11-17 14:05:18.458606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.461989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.462017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:40.232 [2024-11-17 14:05:18.462028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.352 ms 00:16:40.232 [2024-11-17 14:05:18.462033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.462124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.462132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:40.232 [2024-11-17 14:05:18.462138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:40.232 [2024-11-17 14:05:18.462144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.463675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.463701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:40.232 [2024-11-17 14:05:18.463708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:16:40.232 [2024-11-17 14:05:18.463714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.464849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.464935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:40.232 [2024-11-17 14:05:18.464945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.110 ms 00:16:40.232 [2024-11-17 14:05:18.464950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.465876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.465898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:40.232 [2024-11-17 14:05:18.465905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:16:40.232 [2024-11-17 14:05:18.465910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.466674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.232 [2024-11-17 14:05:18.466699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:40.232 [2024-11-17 14:05:18.466705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:16:40.232 [2024-11-17 14:05:18.466710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.232 [2024-11-17 14:05:18.466733] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:40.232 [2024-11-17 14:05:18.466747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2 through 98 report the same values: 0 / 261120 wr_cnt: 0 state: free]
00:16:40.233 [2024-11-17 14:05:18.467301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120
wr_cnt: 0 state: free 00:16:40.233 [2024-11-17 14:05:18.467307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:40.233 [2024-11-17 14:05:18.467318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:40.233 [2024-11-17 14:05:18.467325] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:16:40.233 [2024-11-17 14:05:18.467331] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:40.233 [2024-11-17 14:05:18.467340] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:40.233 [2024-11-17 14:05:18.467345] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:40.233 [2024-11-17 14:05:18.467351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:40.233 [2024-11-17 14:05:18.467356] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:40.233 [2024-11-17 14:05:18.467362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:40.233 [2024-11-17 14:05:18.467368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:40.233 [2024-11-17 14:05:18.467373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:40.233 [2024-11-17 14:05:18.467377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:40.233 [2024-11-17 14:05:18.467382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.233 [2024-11-17 14:05:18.467388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:40.233 [2024-11-17 14:05:18.467398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:16:40.233 [2024-11-17 14:05:18.467403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.233 [2024-11-17 14:05:18.468797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.233 [2024-11-17 14:05:18.468868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:40.233 [2024-11-17 14:05:18.468906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:16:40.233 [2024-11-17 14:05:18.468929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.233 [2024-11-17 14:05:18.469010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.233 [2024-11-17 14:05:18.469033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:40.233 [2024-11-17 14:05:18.469077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:40.233 [2024-11-17 14:05:18.469093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.233 [2024-11-17 14:05:18.473079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.233 [2024-11-17 14:05:18.473160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:40.233 [2024-11-17 14:05:18.473199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.233 [2024-11-17 14:05:18.473222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.233 [2024-11-17 14:05:18.473283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.233 [2024-11-17 14:05:18.473304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:40.233 [2024-11-17 14:05:18.473341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:16:40.233 [2024-11-17 14:05:18.473357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.233 [2024-11-17 14:05:18.473396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.233 [2024-11-17 14:05:18.473492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:40.233 [2024-11-17 14:05:18.473510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.233 [2024-11-17 14:05:18.473524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.473546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.473635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:40.234 [2024-11-17 14:05:18.473656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.473670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.481120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.481256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:40.234 [2024-11-17 14:05:18.481298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.481315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.487399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.487527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:40.234 [2024-11-17 14:05:18.487570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.487588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.487622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.487701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:40.234 [2024-11-17 14:05:18.487719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.487733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.487766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.487844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:40.234 [2024-11-17 14:05:18.487861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.487879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.487945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.487966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:40.234 [2024-11-17 14:05:18.487986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.488000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.488069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.488088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:40.234 
[2024-11-17 14:05:18.488169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.488186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.488227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.488321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:40.234 [2024-11-17 14:05:18.488340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.488354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.488399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:40.234 [2024-11-17 14:05:18.488470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:40.234 [2024-11-17 14:05:18.488488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:40.234 [2024-11-17 14:05:18.488506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.234 [2024-11-17 14:05:18.488624] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.108 ms, result 0 00:16:40.820 00:16:40.820 00:16:40.820 14:05:18 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85555 00:16:40.820 14:05:18 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85555 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85555 ']' 00:16:40.820 14:05:18 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.820 14:05:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:40.820 [2024-11-17 14:05:19.060153] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
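The ftl/trim.sh@71-@73 trace above captures the standard SPDK test launch sequence: start spdk_tgt with the ftl_init log component enabled, record its pid in svcpid, then block in waitforlisten until the target's UNIX-domain RPC socket at /var/tmp/spdk.sock is serviceable. A minimal stand-alone sketch of that sequence follows; the polling loop is an assumed stand-in for the waitforlisten helper in autotest_common.sh (which retries an RPC call rather than merely testing for the socket), and the paths are the ones printed in the log above.

# Start the SPDK target with FTL-init debug logging, as trim.sh@71 does.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
# Assumed stand-in for waitforlisten: wait until the UNIX-domain RPC
# socket appears, bailing out if the target process exited first.
while [ ! -S /var/tmp/spdk.sock ]; do
    kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

Once the socket is up, the script drives the target over scripts/rpc.py (the load_config call below), which is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message precedes any FTL activity.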
00:16:40.820 [2024-11-17 14:05:19.060295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85555 ] 00:16:41.078 [2024-11-17 14:05:19.204711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.078 [2024-11-17 14:05:19.234101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.644 14:05:19 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:41.644 14:05:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:41.644 14:05:19 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:16:41.902 [2024-11-17 14:05:20.095533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:41.902 [2024-11-17 14:05:20.095586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:42.162 [2024-11-17 14:05:20.253710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.162 [2024-11-17 14:05:20.253750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:42.162 [2024-11-17 14:05:20.253760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.162 [2024-11-17 14:05:20.253768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.255490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.255521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.163 [2024-11-17 14:05:20.255529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:16:42.163 [2024-11-17 14:05:20.255535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.255591] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:42.163 [2024-11-17 14:05:20.255760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:42.163 [2024-11-17 14:05:20.255770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.255780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.163 [2024-11-17 14:05:20.255787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:16:42.163 [2024-11-17 14:05:20.255793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.256717] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:42.163 [2024-11-17 14:05:20.258607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.258723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:42.163 [2024-11-17 14:05:20.258737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:16:42.163 [2024-11-17 14:05:20.258743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.258785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.258793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:42.163 [2024-11-17 14:05:20.258805] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:42.163 [2024-11-17 14:05:20.258811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.263065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.263089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.163 [2024-11-17 14:05:20.263097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:16:42.163 [2024-11-17 14:05:20.263103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.263182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.263189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.163 [2024-11-17 14:05:20.263199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:42.163 [2024-11-17 14:05:20.263205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.263226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.263235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:42.163 [2024-11-17 14:05:20.263252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:42.163 [2024-11-17 14:05:20.263259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.263277] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:42.163 [2024-11-17 14:05:20.264412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.264440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.163 [2024-11-17 14:05:20.264446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.139 ms 00:16:42.163 [2024-11-17 14:05:20.264456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.264484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.264491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:42.163 [2024-11-17 14:05:20.264498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.163 [2024-11-17 14:05:20.264504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.264519] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:42.163 [2024-11-17 14:05:20.264534] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:42.163 [2024-11-17 14:05:20.264563] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:42.163 [2024-11-17 14:05:20.264581] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:42.163 [2024-11-17 14:05:20.264661] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:42.163 [2024-11-17 14:05:20.264672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:42.163 [2024-11-17 14:05:20.264680] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:42.163 [2024-11-17 14:05:20.264689] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:42.163 [2024-11-17 14:05:20.264696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:42.163 [2024-11-17 14:05:20.264707] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:42.163 [2024-11-17 14:05:20.264712] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:42.163 [2024-11-17 14:05:20.264718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:42.163 [2024-11-17 14:05:20.264724] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:42.163 [2024-11-17 14:05:20.264731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.264738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:42.163 [2024-11-17 14:05:20.264745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:16:42.163 [2024-11-17 14:05:20.264750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.264818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.163 [2024-11-17 14:05:20.264824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:42.163 [2024-11-17 14:05:20.264833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:42.163 [2024-11-17 14:05:20.264838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.163 [2024-11-17 14:05:20.264916] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:42.163 [2024-11-17 14:05:20.264926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:42.163 [2024-11-17 14:05:20.264936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.163 [2024-11-17 14:05:20.264941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.163 [2024-11-17 14:05:20.264950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:42.163 [2024-11-17 14:05:20.264955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:42.163 [2024-11-17 14:05:20.264961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:42.163 [2024-11-17 14:05:20.264967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:42.163 [2024-11-17 14:05:20.264977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:42.163 [2024-11-17 14:05:20.264982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.163 [2024-11-17 14:05:20.264988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:42.163 [2024-11-17 14:05:20.264993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:42.163 [2024-11-17 14:05:20.264999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:42.163 [2024-11-17 14:05:20.265004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:42.163 [2024-11-17 14:05:20.265011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:42.163 [2024-11-17 14:05:20.265015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.163 
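The geometry reported above is internally consistent: 23592960 L2P entries at the stated 4 bytes per address is 94371840 bytes, which is exactly the 90.00 MiB listed for Region l2p in the NV cache layout a few records earlier. A one-line check (plain shell arithmetic, nothing SPDK-specific):

# 23592960 entries * 4 bytes/entry, converted to MiB (2^20 bytes)
echo $(( 23592960 * 4 / 1048576 ))   # prints 90

The entry count itself corresponds to 90 GiB of addressable space at a 4 KiB block size, somewhat below the raw 103424.00 MiB base device capacity reported above, consistent with FTL reserving part of the device for metadata and overprovisioning (an inference from the numbers, not a statement in the log).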
[2024-11-17 14:05:20.265022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:42.163 [2024-11-17 14:05:20.265027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:42.163 [2024-11-17 14:05:20.265033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:42.163 [2024-11-17 14:05:20.265046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.163 [2024-11-17 14:05:20.265059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:42.163 [2024-11-17 14:05:20.265065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.163 [2024-11-17 14:05:20.265078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:42.163 [2024-11-17 14:05:20.265085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.163 [2024-11-17 14:05:20.265098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:42.163 [2024-11-17 14:05:20.265104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:42.163 [2024-11-17 14:05:20.265117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:42.163 [2024-11-17 14:05:20.265124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.163 [2024-11-17 14:05:20.265136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:42.163 [2024-11-17 14:05:20.265142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:42.163 [2024-11-17 14:05:20.265150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:42.163 [2024-11-17 14:05:20.265156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:42.163 [2024-11-17 14:05:20.265163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:42.163 [2024-11-17 14:05:20.265169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.163 [2024-11-17 14:05:20.265176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:42.163 [2024-11-17 14:05:20.265182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:42.164 [2024-11-17 14:05:20.265189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.164 [2024-11-17 14:05:20.265195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:42.164 [2024-11-17 14:05:20.265203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:42.164 [2024-11-17 14:05:20.265209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:42.164 [2024-11-17 14:05:20.265217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:42.164 [2024-11-17 14:05:20.265223] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:16:42.164 [2024-11-17 14:05:20.265230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:42.164 [2024-11-17 14:05:20.265252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:42.164 [2024-11-17 14:05:20.265260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:42.164 [2024-11-17 14:05:20.265266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:42.164 [2024-11-17 14:05:20.265275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:42.164 [2024-11-17 14:05:20.265282] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:42.164 [2024-11-17 14:05:20.265292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:42.164 [2024-11-17 14:05:20.265308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:42.164 [2024-11-17 14:05:20.265314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:42.164 [2024-11-17 14:05:20.265321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:42.164 [2024-11-17 14:05:20.265328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:42.164 [2024-11-17 14:05:20.265335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:42.164 [2024-11-17 14:05:20.265342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:42.164 [2024-11-17 14:05:20.265349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:42.164 [2024-11-17 14:05:20.265356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:42.164 [2024-11-17 14:05:20.265363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:42.164 [2024-11-17 14:05:20.265399] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:42.164 [2024-11-17 
14:05:20.265411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:42.164 [2024-11-17 14:05:20.265426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:42.164 [2024-11-17 14:05:20.265432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:42.164 [2024-11-17 14:05:20.265440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:42.164 [2024-11-17 14:05:20.265447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.265459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:42.164 [2024-11-17 14:05:20.265465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:16:42.164 [2024-11-17 14:05:20.265473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.273586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.273701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.164 [2024-11-17 14:05:20.273753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.071 ms 00:16:42.164 [2024-11-17 14:05:20.273774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.273887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.273928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:42.164 [2024-11-17 14:05:20.273999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:16:42.164 [2024-11-17 14:05:20.274018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.280970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.281070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.164 [2024-11-17 14:05:20.281118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.923 ms 00:16:42.164 [2024-11-17 14:05:20.281141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.281187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.281274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.164 [2024-11-17 14:05:20.281292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:42.164 [2024-11-17 14:05:20.281333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.281625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.281701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.164 [2024-11-17 14:05:20.281745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:16:42.164 [2024-11-17 14:05:20.281764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:42.164 [2024-11-17 14:05:20.281873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.281899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.164 [2024-11-17 14:05:20.281943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:16:42.164 [2024-11-17 14:05:20.281962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.303168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.303563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.164 [2024-11-17 14:05:20.303685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.174 ms 00:16:42.164 [2024-11-17 14:05:20.303715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.305885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:42.164 [2024-11-17 14:05:20.306005] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:42.164 [2024-11-17 14:05:20.306070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.306093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:42.164 [2024-11-17 14:05:20.306168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms 00:16:42.164 [2024-11-17 14:05:20.306195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.319567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.319658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:42.164 [2024-11-17 14:05:20.319700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.212 ms 00:16:42.164 [2024-11-17 14:05:20.319734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.321141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.321228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:42.164 [2024-11-17 14:05:20.321252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:16:42.164 [2024-11-17 14:05:20.321260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.322421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.322443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:42.164 [2024-11-17 14:05:20.322451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:16:42.164 [2024-11-17 14:05:20.322458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.322755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.322776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:42.164 [2024-11-17 14:05:20.322784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:16:42.164 [2024-11-17 14:05:20.322791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.336117] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.336229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:42.164 [2024-11-17 14:05:20.336251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.310 ms 00:16:42.164 [2024-11-17 14:05:20.336262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.341968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:42.164 [2024-11-17 14:05:20.353255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.353284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:42.164 [2024-11-17 14:05:20.353296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.945 ms 00:16:42.164 [2024-11-17 14:05:20.353305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.353378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.353387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:42.164 [2024-11-17 14:05:20.353395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:42.164 [2024-11-17 14:05:20.353405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.164 [2024-11-17 14:05:20.353442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.164 [2024-11-17 14:05:20.353448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:42.165 [2024-11-17 14:05:20.353460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:16:42.165 [2024-11-17 14:05:20.353466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.165 [2024-11-17 14:05:20.353488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.165 [2024-11-17 14:05:20.353494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:42.165 [2024-11-17 14:05:20.353502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:42.165 [2024-11-17 14:05:20.353507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.165 [2024-11-17 14:05:20.353534] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:42.165 [2024-11-17 14:05:20.353541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.165 [2024-11-17 14:05:20.353548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:42.165 [2024-11-17 14:05:20.353554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:42.165 [2024-11-17 14:05:20.353560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.165 [2024-11-17 14:05:20.356672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.165 [2024-11-17 14:05:20.356705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:42.165 [2024-11-17 14:05:20.356714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.096 ms 00:16:42.165 [2024-11-17 14:05:20.356722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.165 [2024-11-17 14:05:20.356783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.165 [2024-11-17 14:05:20.356792] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:42.165 [2024-11-17 14:05:20.356800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:16:42.165 [2024-11-17 14:05:20.356808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.165 [2024-11-17 14:05:20.357464] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:42.165 [2024-11-17 14:05:20.358266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.523 ms, result 0 00:16:42.165 [2024-11-17 14:05:20.358961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:42.165 Some configs were skipped because the RPC state that can call them passed over. 00:16:42.165 14:05:20 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:16:42.423 [2024-11-17 14:05:20.579692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.423 [2024-11-17 14:05:20.579854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:42.423 [2024-11-17 14:05:20.579904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:16:42.423 [2024-11-17 14:05:20.579924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.423 [2024-11-17 14:05:20.579965] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.203 ms, result 0 00:16:42.423 true 00:16:42.423 14:05:20 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:16:42.682 [2024-11-17 14:05:20.787918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.788075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:16:42.682 [2024-11-17 14:05:20.788177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:16:42.682 [2024-11-17 14:05:20.788204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.788260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.299 ms, result 0 00:16:42.682 true 00:16:42.682 14:05:20 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85555 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85555 ']' 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85555 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85555 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.682 killing process with pid 85555 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85555' 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85555 00:16:42.682 14:05:20 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85555 00:16:42.682 [2024-11-17 14:05:20.910529] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.910574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:42.682 [2024-11-17 14:05:20.910584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:42.682 [2024-11-17 14:05:20.910590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.910609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:42.682 [2024-11-17 14:05:20.910979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.910994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:42.682 [2024-11-17 14:05:20.911001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:16:42.682 [2024-11-17 14:05:20.911008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.911292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.911308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:42.682 [2024-11-17 14:05:20.911316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:16:42.682 [2024-11-17 14:05:20.911324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.914414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.914439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:42.682 [2024-11-17 14:05:20.914446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:16:42.682 [2024-11-17 14:05:20.914455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.919606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.919635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:42.682 [2024-11-17 14:05:20.919642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.130 ms 00:16:42.682 [2024-11-17 14:05:20.919651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.921003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.921032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:42.682 [2024-11-17 14:05:20.921039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:16:42.682 [2024-11-17 14:05:20.921046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.924307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.924335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:42.682 [2024-11-17 14:05:20.924342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:16:42.682 [2024-11-17 14:05:20.924349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.924446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.924455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:42.682 [2024-11-17 14:05:20.924462] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:42.682 [2024-11-17 14:05:20.924469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.926045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.926153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:42.682 [2024-11-17 14:05:20.926164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:16:42.682 [2024-11-17 14:05:20.926173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.927544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.927570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:42.682 [2024-11-17 14:05:20.927577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:16:42.682 [2024-11-17 14:05:20.927584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.928473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.682 [2024-11-17 14:05:20.928503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:42.682 [2024-11-17 14:05:20.928510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:16:42.682 [2024-11-17 14:05:20.928518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.682 [2024-11-17 14:05:20.929315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.683 [2024-11-17 14:05:20.929343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:42.683 [2024-11-17 14:05:20.929350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:16:42.683 [2024-11-17 14:05:20.929356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.683 [2024-11-17 14:05:20.929380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:42.683 [2024-11-17 14:05:20.929396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:42.683 [2024-11-17 14:05:20.929465] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
[Bands 12-84 omitted: every band reports the identical 0 / 261120 wr_cnt: 0 state: free]
[2024-11-17 14:05:20.929950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120
wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.929999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:42.684 [2024-11-17 14:05:20.930065] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:42.684 [2024-11-17 14:05:20.930071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:16:42.684 [2024-11-17 14:05:20.930079] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:42.684 [2024-11-17 14:05:20.930084] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:42.684 [2024-11-17 14:05:20.930094] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:42.684 [2024-11-17 14:05:20.930102] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:42.684 [2024-11-17 14:05:20.930108] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:42.684 [2024-11-17 14:05:20.930114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:42.684 [2024-11-17 14:05:20.930125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:42.684 [2024-11-17 14:05:20.930130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:42.684 [2024-11-17 14:05:20.930137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:42.684 [2024-11-17 14:05:20.930143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.684 
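The statistics dump just above reports 960 total writes against 0 user writes, which is presumably why `WAF` prints as `inf`: write amplification factor is conventionally total device writes divided by user-issued writes, and the divisor here is zero. A minimal sketch of that arithmetic (the function name and the non-zero second example are illustrative, not part of SPDK):

```python
def waf(total_writes: int, user_writes: int) -> float:
    """Write amplification factor: total device writes per user write.
    The dump above shows total=960, user=0, hence the logged 'WAF: inf'."""
    return float("inf") if user_writes == 0 else total_writes / user_writes

assert waf(960, 0) == float("inf")  # matches the "WAF: inf" line above
assert waf(960, 480) == 2.0         # hypothetical non-zero user workload
```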
[2024-11-17 14:05:20.930150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:42.684 [2024-11-17 14:05:20.930156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:16:42.684 [2024-11-17 14:05:20.930164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.931436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.684 [2024-11-17 14:05:20.931464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:42.684 [2024-11-17 14:05:20.931471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.248 ms 00:16:42.684 [2024-11-17 14:05:20.931482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.931551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:42.684 [2024-11-17 14:05:20.931559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:42.684 [2024-11-17 14:05:20.931565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:42.684 [2024-11-17 14:05:20.931572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.936104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.936133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:42.684 [2024-11-17 14:05:20.936143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.936151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.936199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.936209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:42.684 [2024-11-17 14:05:20.936215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.936224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.936269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.936281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:42.684 [2024-11-17 14:05:20.936287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.936294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.936308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.936315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:42.684 [2024-11-17 14:05:20.936321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.936328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.944051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.944185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:42.684 [2024-11-17 14:05:20.944197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.944205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950162] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:42.684 [2024-11-17 14:05:20.950203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:42.684 [2024-11-17 14:05:20.950321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:42.684 [2024-11-17 14:05:20.950377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:42.684 [2024-11-17 14:05:20.950451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:42.684 [2024-11-17 14:05:20.950502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:42.684 [2024-11-17 14:05:20.950553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:42.684 [2024-11-17 14:05:20.950602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:42.684 [2024-11-17 14:05:20.950608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:42.684 [2024-11-17 14:05:20.950615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:42.684 [2024-11-17 14:05:20.950718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.172 ms, result 0 00:16:42.943 14:05:21 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:16:42.943 14:05:21 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:43.202 [2024-11-17 14:05:21.295764] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:43.202 [2024-11-17 14:05:21.296032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85597 ] 00:16:43.202 [2024-11-17 14:05:21.440789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.202 [2024-11-17 14:05:21.469599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.461 [2024-11-17 14:05:21.550123] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:43.461 [2024-11-17 14:05:21.550180] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:43.461 [2024-11-17 14:05:21.692087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.692219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:43.461 [2024-11-17 14:05:21.692235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.461 [2024-11-17 14:05:21.692255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.693967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.693995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:43.461 [2024-11-17 14:05:21.694004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.695 ms 00:16:43.461 [2024-11-17 14:05:21.694009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.694065] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:43.461 [2024-11-17 14:05:21.694229] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:43.461 [2024-11-17 14:05:21.694250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.694256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:43.461 [2024-11-17 14:05:21.694264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:16:43.461 [2024-11-17 14:05:21.694269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.695178] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:43.461 [2024-11-17 14:05:21.697191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.697222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:43.461 [2024-11-17 14:05:21.697233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:16:43.461 [2024-11-17 14:05:21.697251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.697297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.697305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:43.461 [2024-11-17 14:05:21.697311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:16:43.461 [2024-11-17 14:05:21.697316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.701563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.701588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:43.461 [2024-11-17 14:05:21.701596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.217 ms 00:16:43.461 [2024-11-17 14:05:21.701601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.701688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.701696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:43.461 [2024-11-17 14:05:21.701707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:16:43.461 [2024-11-17 14:05:21.701715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.701733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.701741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:43.461 [2024-11-17 14:05:21.701747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:43.461 [2024-11-17 14:05:21.701752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.701770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:43.461 [2024-11-17 14:05:21.702896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.702998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:43.461 [2024-11-17 14:05:21.703009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:16:43.461 [2024-11-17 14:05:21.703015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.703048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.703057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:43.461 [2024-11-17 14:05:21.703063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:43.461 [2024-11-17 14:05:21.703071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.703083] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:43.461 [2024-11-17 14:05:21.703099] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:43.461 [2024-11-17 14:05:21.703125] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:43.461 [2024-11-17 14:05:21.703139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:43.461 [2024-11-17 14:05:21.703221] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:43.461 [2024-11-17 14:05:21.703229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:43.461 [2024-11-17 14:05:21.703251] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:43.461 [2024-11-17 14:05:21.703259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:43.461 [2024-11-17 14:05:21.703266] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:43.461 [2024-11-17 14:05:21.703275] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:43.461 [2024-11-17 14:05:21.703280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:43.461 [2024-11-17 14:05:21.703286] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:43.461 [2024-11-17 14:05:21.703291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:43.461 [2024-11-17 14:05:21.703297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.703303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:43.461 [2024-11-17 14:05:21.703312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:16:43.461 [2024-11-17 14:05:21.703319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.703388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.461 [2024-11-17 14:05:21.703395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:43.461 [2024-11-17 14:05:21.703401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:43.461 [2024-11-17 14:05:21.703406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.461 [2024-11-17 14:05:21.703496] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:43.462 [2024-11-17 14:05:21.703507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:43.462 [2024-11-17 14:05:21.703517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:43.462 [2024-11-17 14:05:21.703535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:43.462 [2024-11-17 14:05:21.703552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.462 [2024-11-17 14:05:21.703564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:43.462 [2024-11-17 14:05:21.703569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:43.462 [2024-11-17 14:05:21.703575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:43.462 [2024-11-17 14:05:21.703580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:43.462 [2024-11-17 14:05:21.703586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:43.462 [2024-11-17 14:05:21.703592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703598] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:43.462 [2024-11-17 14:05:21.703604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:43.462 [2024-11-17 14:05:21.703621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:43.462 [2024-11-17 14:05:21.703639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:43.462 [2024-11-17 14:05:21.703659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:43.462 [2024-11-17 14:05:21.703676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:43.462 [2024-11-17 14:05:21.703693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.462 [2024-11-17 14:05:21.703704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:43.462 [2024-11-17 14:05:21.703710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:43.462 [2024-11-17 14:05:21.703715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:43.462 [2024-11-17 14:05:21.703721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:43.462 [2024-11-17 14:05:21.703726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:43.462 [2024-11-17 14:05:21.703732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:43.462 [2024-11-17 14:05:21.703743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:43.462 [2024-11-17 14:05:21.703750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703756] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:43.462 [2024-11-17 14:05:21.703764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:43.462 [2024-11-17 14:05:21.703771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:43.462 [2024-11-17 14:05:21.703783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:43.462 
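Each `dump_region` record above is a name/offset/blocks triplet spread over three NOTICE lines. If the layout needs to be recovered programmatically from a log like this one, a small parser along the following lines works; the regex and field names are ad hoc, written only for the output format shown here, and are not an SPDK interface:

```python
import re

# One record in the dump is three NOTICE lines:
#   Region <name> / offset: <X> MiB / blocks: <Y> MiB
# The non-greedy gaps absorb the interleaved timestamps and source locations.
REGION_RE = re.compile(
    r"Region (?P<name>\S+).*?offset:\s*(?P<offset>[\d.]+) MiB"
    r".*?blocks:\s*(?P<blocks>[\d.]+) MiB",
    re.DOTALL,
)

def parse_regions(dump: str) -> list[tuple[str, float, float]]:
    """Return (name, offset_mib, size_mib) for each region in a layout dump."""
    return [(m["name"], float(m["offset"]), float(m["blocks"]))
            for m in REGION_RE.finditer(dump)]

# Against the NV cache layout above this yields e.g.
# ("sb", 0.0, 0.12), ("l2p", 0.12, 90.0), ("band_md", 90.12, 0.5), ...
```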
[2024-11-17 14:05:21.703789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:43.462 [2024-11-17 14:05:21.703794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:43.462 [2024-11-17 14:05:21.703800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:43.462 [2024-11-17 14:05:21.703806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:43.462 [2024-11-17 14:05:21.703812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:43.462 [2024-11-17 14:05:21.703818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:43.462 [2024-11-17 14:05:21.703828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:43.462 [2024-11-17 14:05:21.703843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:43.462 [2024-11-17 14:05:21.703850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:43.462 [2024-11-17 14:05:21.703857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:43.462 [2024-11-17 14:05:21.703863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:43.462 [2024-11-17 14:05:21.703869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:43.462 [2024-11-17 14:05:21.703875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:43.462 [2024-11-17 14:05:21.703881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:43.462 [2024-11-17 14:05:21.703887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:43.462 [2024-11-17 14:05:21.703893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:43.462 [2024-11-17 14:05:21.703923] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:43.462 [2024-11-17 14:05:21.703930] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:43.462 [2024-11-17 14:05:21.703943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:43.462 [2024-11-17 14:05:21.703950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:43.462 [2024-11-17 14:05:21.703957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:43.462 [2024-11-17 14:05:21.703964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.703970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:43.462 [2024-11-17 14:05:21.703977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:16:43.462 [2024-11-17 14:05:21.703982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.462 [2024-11-17 14:05:21.723255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.723400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.462 [2024-11-17 14:05:21.723502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.228 ms 00:16:43.462 [2024-11-17 14:05:21.723538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.462 [2024-11-17 14:05:21.723698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.723781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.462 [2024-11-17 14:05:21.723808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:43.462 [2024-11-17 14:05:21.723839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.462 [2024-11-17 14:05:21.731782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.731897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.462 [2024-11-17 14:05:21.731954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.874 ms 00:16:43.462 [2024-11-17 14:05:21.731979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.462 [2024-11-17 14:05:21.732039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.732198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.462 [2024-11-17 14:05:21.732232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:43.462 [2024-11-17 14:05:21.732269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.462 [2024-11-17 14:05:21.732595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.462 [2024-11-17 14:05:21.732691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.462 [2024-11-17 14:05:21.732742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:43.462 [2024-11-17 14:05:21.732766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 
14:05:21.732919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.732952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.463 [2024-11-17 14:05:21.733001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:16:43.463 [2024-11-17 14:05:21.733107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.737870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.737979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.463 [2024-11-17 14:05:21.738037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.717 ms 00:16:43.463 [2024-11-17 14:05:21.738061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.740365] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:43.463 [2024-11-17 14:05:21.740469] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:43.463 [2024-11-17 14:05:21.740517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.740534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:43.463 [2024-11-17 14:05:21.740549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:16:43.463 [2024-11-17 14:05:21.740562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.751860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.751972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:43.463 [2024-11-17 14:05:21.752030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.255 ms 00:16:43.463 [2024-11-17 14:05:21.752064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.753614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.753704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:43.463 [2024-11-17 14:05:21.753746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:16:43.463 [2024-11-17 14:05:21.753763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.754940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.755020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:43.463 [2024-11-17 14:05:21.755057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:16:43.463 [2024-11-17 14:05:21.755073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.463 [2024-11-17 14:05:21.755334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.463 [2024-11-17 14:05:21.755399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.463 [2024-11-17 14:05:21.755438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:16:43.463 [2024-11-17 14:05:21.755468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.723 [2024-11-17 14:05:21.768446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:16:43.723 [2024-11-17 14:05:21.768550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:43.723 [2024-11-17 14:05:21.768594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:16:43.723 [2024-11-17 14:05:21.768611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.723 [2024-11-17 14:05:21.774217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:43.723 [2024-11-17 14:05:21.785648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.723 [2024-11-17 14:05:21.785745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.723 [2024-11-17 14:05:21.785786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.987 ms 00:16:43.723 [2024-11-17 14:05:21.785803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.723 [2024-11-17 14:05:21.785888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.723 [2024-11-17 14:05:21.785942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:43.723 [2024-11-17 14:05:21.785966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:43.723 [2024-11-17 14:05:21.785981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.723 [2024-11-17 14:05:21.786051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.723 [2024-11-17 14:05:21.786377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.723 [2024-11-17 14:05:21.786622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:16:43.723 [2024-11-17 14:05:21.786816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.723 [2024-11-17 14:05:21.787075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.723 [2024-11-17 14:05:21.787266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.723 [2024-11-17 14:05:21.787403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:43.723 [2024-11-17 14:05:21.787517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.724 [2024-11-17 14:05:21.787642] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:43.724 [2024-11-17 14:05:21.787694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.724 [2024-11-17 14:05:21.787715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:43.724 [2024-11-17 14:05:21.787736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:16:43.724 [2024-11-17 14:05:21.787755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.724 [2024-11-17 14:05:21.793967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.724 [2024-11-17 14:05:21.794043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.724 [2024-11-17 14:05:21.794069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:16:43.724 [2024-11-17 14:05:21.794105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.724 [2024-11-17 14:05:21.794305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.724 [2024-11-17 14:05:21.794348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:16:43.724 [2024-11-17 14:05:21.794370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:16:43.724 [2024-11-17 14:05:21.794390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.724 [2024-11-17 14:05:21.796172] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:43.724 [2024-11-17 14:05:21.798634] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.321 ms, result 0 00:16:43.724 [2024-11-17 14:05:21.799980] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:43.724 [2024-11-17 14:05:21.807247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:44.659  [2024-11-17T14:05:23.894Z] Copying: 21/256 [MB] (21 MBps) [2024-11-17T14:05:24.829Z] Copying: 42/256 [MB] (20 MBps) [2024-11-17T14:05:26.206Z] Copying: 63/256 [MB] (21 MBps) [2024-11-17T14:05:27.142Z] Copying: 89/256 [MB] (25 MBps) [2024-11-17T14:05:28.077Z] Copying: 111/256 [MB] (22 MBps) [2024-11-17T14:05:29.015Z] Copying: 127/256 [MB] (15 MBps) [2024-11-17T14:05:29.960Z] Copying: 151/256 [MB] (23 MBps) [2024-11-17T14:05:30.931Z] Copying: 169/256 [MB] (18 MBps) [2024-11-17T14:05:31.885Z] Copying: 192/256 [MB] (22 MBps) [2024-11-17T14:05:32.829Z] Copying: 207/256 [MB] (15 MBps) [2024-11-17T14:05:34.216Z] Copying: 219/256 [MB] (11 MBps) [2024-11-17T14:05:34.790Z] Copying: 233/256 [MB] (14 MBps) [2024-11-17T14:05:34.790Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-17 14:05:34.724948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.489 [2024-11-17 14:05:34.726844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.726893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.489 [2024-11-17 14:05:34.726919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.489 [2024-11-17 14:05:34.726929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.726951] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:56.489 [2024-11-17 14:05:34.727709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.727746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.489 [2024-11-17 14:05:34.727759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:16:56.489 [2024-11-17 14:05:34.727768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.728036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.728208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.489 [2024-11-17 14:05:34.728224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:16:56.489 [2024-11-17 14:05:34.728233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.731951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.731973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:56.489 [2024-11-17 14:05:34.731983] 
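The spdk_dd progress trace above ends at 256/256 MB with a reported 19 MBps average, i.e. roughly 13.5 s of copy time, which lines up with the 14:05:21 to 14:05:34 wall-clock stamps bracketing the run. A throwaway extractor for those progress samples (regex tailored to this exact output, not an spdk_dd interface):

```python
import re

# Progress records look like "Copying: 21/256 [MB] (21 MBps)" with a final
# "Copying: 256/256 [MB] (average 19 MBps)" summary record.
COPY_RE = re.compile(
    r"Copying: (?P<done>\d+)/(?P<total>\d+) \[MB\] \((?:average )?(?P<rate>\d+) MBps\)"
)

def copy_samples(log_text: str) -> list[tuple[int, int, int]]:
    """Extract (copied_mb, total_mb, rate_mbps) samples from dd progress output."""
    return [(int(m["done"]), int(m["total"]), int(m["rate"]))
            for m in COPY_RE.finditer(log_text)]

# 256 MB at the reported 19 MBps average is ~13.5 s, consistent with the
# 14:05:21 -> 14:05:34 timestamps around the copy.
```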
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:16:56.489 [2024-11-17 14:05:34.731991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.739516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.739554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:56.489 [2024-11-17 14:05:34.739566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.501 ms 00:16:56.489 [2024-11-17 14:05:34.739574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.742181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.742229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.489 [2024-11-17 14:05:34.742255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:16:56.489 [2024-11-17 14:05:34.742273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.747284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.747330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.489 [2024-11-17 14:05:34.747349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:16:56.489 [2024-11-17 14:05:34.747357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.747502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.747513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.489 [2024-11-17 14:05:34.747522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:56.489 [2024-11-17 14:05:34.747530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.750752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.750796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:56.489 [2024-11-17 14:05:34.750805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:16:56.489 [2024-11-17 14:05:34.750812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.753489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.753655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:56.489 [2024-11-17 14:05:34.753672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:16:56.489 [2024-11-17 14:05:34.753678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.755746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.755792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.489 [2024-11-17 14:05:34.755802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:16:56.489 [2024-11-17 14:05:34.755809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.489 [2024-11-17 14:05:34.757662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.489 [2024-11-17 14:05:34.757718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Set FTL clean state
00:16:56.489 [2024-11-17 14:05:34.757728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms
00:16:56.489 [2024-11-17 14:05:34.757735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.489 [2024-11-17 14:05:34.757775] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:56.489 [2024-11-17 14:05:34.757800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-99: all 0 / 261120 wr_cnt: 0 state: free]
00:16:56.490 [2024-11-17 14:05:34.758582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:56.490 [2024-11-17 14:05:34.758598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:56.490 [2024-11-17 14:05:34.758607] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42
00:16:56.491 [2024-11-17 14:05:34.758622] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:56.491 [2024-11-17 14:05:34.758629] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:56.491 [2024-11-17 14:05:34.758636] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:56.491 [2024-11-17 14:05:34.758644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:56.491 [2024-11-17 14:05:34.758652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:56.491 [2024-11-17 14:05:34.758661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:16:56.491 [2024-11-17 14:05:34.758668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:16:56.491 [2024-11-17 14:05:34.758678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:16:56.491 [2024-11-17 14:05:34.758684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:16:56.491 [2024-11-17 14:05:34.758694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:56.491 [2024-11-17 14:05:34.758702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:56.491 [2024-11-17 14:05:34.758714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms
00:16:56.491 [2024-11-17 14:05:34.758721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.760939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:56.491 [2024-11-17 14:05:34.761004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:56.491 [2024-11-17 14:05:34.761016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 ms
00:16:56.491 [2024-11-17 14:05:34.761026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.761158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:56.491 [2024-11-17 14:05:34.761175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:56.491 [2024-11-17 14:05:34.761184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:16:56.491 [2024-11-17 14:05:34.761191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.768653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.491 [2024-11-17 14:05:34.768813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:56.491 [2024-11-17 14:05:34.768831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.491 [2024-11-17 14:05:34.768840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
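The stats dump above shows why WAF is printed as "inf": the device recorded 960 media writes during this phase but zero user writes, so the ratio is undefined. A minimal sketch of that calculation, assuming (the log itself does not spell out the formula) that WAF is total media writes divided by user writes:

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes per user write.
        # With no user writes the ratio is undefined; float("inf")
        # reproduces the "WAF: inf" printed in the dump above.
        return float("inf") if user_writes == 0 else total_writes / user_writes

    print(waf(960, 0))    # inf, the counters from this stats dump
    print(waf(960, 480))  # 2.0, i.e. one extra media write per user write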
00:16:56.491 [2024-11-17 14:05:34.768914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.491 [2024-11-17 14:05:34.768929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:16:56.491 [2024-11-17 14:05:34.768937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.491 [2024-11-17 14:05:34.768945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.768995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.491 [2024-11-17 14:05:34.769005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:16:56.491 [2024-11-17 14:05:34.769012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.491 [2024-11-17 14:05:34.769020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.769037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.491 [2024-11-17 14:05:34.769046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:16:56.491 [2024-11-17 14:05:34.769056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.491 [2024-11-17 14:05:34.769067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.491 [2024-11-17 14:05:34.782491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.491 [2024-11-17 14:05:34.782547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:16:56.491 [2024-11-17 14:05:34.782558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.491 [2024-11-17 14:05:34.782566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.792949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:16:56.753 [2024-11-17 14:05:34.793023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:16:56.753 [2024-11-17 14:05:34.793134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:16:56.753 [2024-11-17 14:05:34.793193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:16:56.753 [2024-11-17 14:05:34.793330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:56.753 [2024-11-17 14:05:34.793392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:56.753 [2024-11-17 14:05:34.793464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:56.753 [2024-11-17 14:05:34.793534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:56.753 [2024-11-17 14:05:34.793543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:56.753 [2024-11-17 14:05:34.793554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:56.753 [2024-11-17 14:05:34.793708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.835 ms, result 0
00:16:56.753
00:16:56.753
00:16:56.753 14:05:35 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:16:56.753 14:05:35 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:16:57.325 14:05:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:16:57.586 [2024-11-17 14:05:35.679331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
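trim.sh then verifies the test file and reseeds ftl0 through spdk_dd. The sizes are self-consistent: --count=1024 blocks at the FTL's 4 KiB block size (an assumption here; the block size is not printed at this point in the log) gives 1024 * 4096 = 4194304 bytes, exactly the --bytes=4194304 passed to cmp. A hedged Python sketch of the same two checks, comparing against zeroes like "cmp ... /dev/zero" and hashing like md5sum:

    import hashlib

    BLOCK_SIZE = 4096              # assumed FTL block size
    COUNT = 1024                   # --count from the spdk_dd invocation above
    EXPECTED = BLOCK_SIZE * COUNT  # 4194304, matching cmp --bytes=4194304

    def check(path: str) -> str:
        # Mirror "cmp --bytes=4194304 <path> /dev/zero" and "md5sum <path>".
        with open(path, "rb") as f:
            data = f.read(EXPECTED)
        assert len(data) == EXPECTED, "file shorter than expected"
        print("all zeroes" if data == bytes(EXPECTED) else "non-zero data")
        return hashlib.md5(data).hexdigest()

    print(check("/home/vagrant/spdk_repo/spdk/test/ftl/data"))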
00:16:57.586 [2024-11-17 14:05:35.679501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85751 ] 00:16:57.586 [2024-11-17 14:05:35.830672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.586 [2024-11-17 14:05:35.881020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.847 [2024-11-17 14:05:35.994575] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:57.847 [2024-11-17 14:05:35.994658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:58.110 [2024-11-17 14:05:36.155294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.155358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:58.110 [2024-11-17 14:05:36.155373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.110 [2024-11-17 14:05:36.155382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.157944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.157993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.110 [2024-11-17 14:05:36.158008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:16:58.110 [2024-11-17 14:05:36.158016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.158119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:58.110 [2024-11-17 14:05:36.158397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:58.110 [2024-11-17 14:05:36.158416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.158424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.110 [2024-11-17 14:05:36.158436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:16:58.110 [2024-11-17 14:05:36.158444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.160147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:58.110 [2024-11-17 14:05:36.163838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.164033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:58.110 [2024-11-17 14:05:36.164053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:16:58.110 [2024-11-17 14:05:36.164066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.164273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.164299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:58.110 [2024-11-17 14:05:36.164310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:58.110 [2024-11-17 14:05:36.164317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.172265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:58.110 [2024-11-17 14:05:36.172304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.110 [2024-11-17 14:05:36.172315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.894 ms 00:16:58.110 [2024-11-17 14:05:36.172322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.172461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.172472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.110 [2024-11-17 14:05:36.172482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:58.110 [2024-11-17 14:05:36.172493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.172519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.172532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:58.110 [2024-11-17 14:05:36.172540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:58.110 [2024-11-17 14:05:36.172551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.172576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:58.110 [2024-11-17 14:05:36.174570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.174730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.110 [2024-11-17 14:05:36.174747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.002 ms 00:16:58.110 [2024-11-17 14:05:36.174763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.174808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.110 [2024-11-17 14:05:36.174820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:58.110 [2024-11-17 14:05:36.174832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:58.110 [2024-11-17 14:05:36.174840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.110 [2024-11-17 14:05:36.174859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:58.111 [2024-11-17 14:05:36.174881] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:16:58.111 [2024-11-17 14:05:36.174919] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:58.111 [2024-11-17 14:05:36.174939] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:16:58.111 [2024-11-17 14:05:36.175048] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:58.111 [2024-11-17 14:05:36.175059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:58.111 [2024-11-17 14:05:36.175070] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:58.111 [2024-11-17 14:05:36.175084] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175093] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175101] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:58.111 [2024-11-17 14:05:36.175112] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:58.111 [2024-11-17 14:05:36.175119] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:58.111 [2024-11-17 14:05:36.175128] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:58.111 [2024-11-17 14:05:36.175136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.111 [2024-11-17 14:05:36.175146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:58.111 [2024-11-17 14:05:36.175156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:16:58.111 [2024-11-17 14:05:36.175164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.111 [2024-11-17 14:05:36.175284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.111 [2024-11-17 14:05:36.175296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:58.111 [2024-11-17 14:05:36.175305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:58.111 [2024-11-17 14:05:36.175314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.111 [2024-11-17 14:05:36.175420] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:58.111 [2024-11-17 14:05:36.175450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:58.111 [2024-11-17 14:05:36.175460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:58.111 [2024-11-17 14:05:36.175490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:58.111 [2024-11-17 14:05:36.175520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.111 [2024-11-17 14:05:36.175535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:58.111 [2024-11-17 14:05:36.175544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:58.111 [2024-11-17 14:05:36.175551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:58.111 [2024-11-17 14:05:36.175559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:58.111 [2024-11-17 14:05:36.175570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:58.111 [2024-11-17 14:05:36.175579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:58.111 [2024-11-17 14:05:36.175596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175604] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:58.111 [2024-11-17 14:05:36.175620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:58.111 [2024-11-17 14:05:36.175643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:58.111 [2024-11-17 14:05:36.175674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:58.111 [2024-11-17 14:05:36.175697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:58.111 [2024-11-17 14:05:36.175717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.111 [2024-11-17 14:05:36.175730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:58.111 [2024-11-17 14:05:36.175736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:58.111 [2024-11-17 14:05:36.175743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:58.111 [2024-11-17 14:05:36.175750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:58.111 [2024-11-17 14:05:36.175757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:58.111 [2024-11-17 14:05:36.175763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:58.111 [2024-11-17 14:05:36.175779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:58.111 [2024-11-17 14:05:36.175786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175792] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:58.111 [2024-11-17 14:05:36.175803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:58.111 [2024-11-17 14:05:36.175811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:58.111 [2024-11-17 14:05:36.175828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:58.111 [2024-11-17 14:05:36.175835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:58.111 [2024-11-17 14:05:36.175842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:58.111 
[2024-11-17 14:05:36.175849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:58.111 [2024-11-17 14:05:36.175855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:58.111 [2024-11-17 14:05:36.175862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:58.111 [2024-11-17 14:05:36.175872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:58.111 [2024-11-17 14:05:36.175882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.111 [2024-11-17 14:05:36.175890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:58.111 [2024-11-17 14:05:36.175900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:58.111 [2024-11-17 14:05:36.175907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:16:58.111 [2024-11-17 14:05:36.175914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:58.111 [2024-11-17 14:05:36.175922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:58.111 [2024-11-17 14:05:36.175930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:58.111 [2024-11-17 14:05:36.175937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:58.111 [2024-11-17 14:05:36.175944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:58.111 [2024-11-17 14:05:36.175951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:58.111 [2024-11-17 14:05:36.175958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:58.111 [2024-11-17 14:05:36.175965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:58.111 [2024-11-17 14:05:36.175972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:58.112 [2024-11-17 14:05:36.175979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:58.112 [2024-11-17 14:05:36.175986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:58.112 [2024-11-17 14:05:36.175993] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:58.112 [2024-11-17 14:05:36.176002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:58.112 [2024-11-17 14:05:36.176009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:16:58.112 [2024-11-17 14:05:36.176018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:58.112 [2024-11-17 14:05:36.176025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:58.112 [2024-11-17 14:05:36.176033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:58.112 [2024-11-17 14:05:36.176041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.176048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:58.112 [2024-11-17 14:05:36.176057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:16:58.112 [2024-11-17 14:05:36.176067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.196998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.197061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.112 [2024-11-17 14:05:36.197087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.875 ms 00:16:58.112 [2024-11-17 14:05:36.197097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.197305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.197322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:58.112 [2024-11-17 14:05:36.197334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:58.112 [2024-11-17 14:05:36.197349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.209557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.209604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.112 [2024-11-17 14:05:36.209615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.179 ms 00:16:58.112 [2024-11-17 14:05:36.209623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.209698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.209708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.112 [2024-11-17 14:05:36.209724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:58.112 [2024-11-17 14:05:36.209731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.210226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.210293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.112 [2024-11-17 14:05:36.210304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:16:58.112 [2024-11-17 14:05:36.210313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.210469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.210486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.112 [2024-11-17 14:05:36.210496] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:58.112 [2024-11-17 14:05:36.210508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.217775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.217827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.112 [2024-11-17 14:05:36.217838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.242 ms 00:16:58.112 [2024-11-17 14:05:36.217845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.221603] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:16:58.112 [2024-11-17 14:05:36.221659] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:58.112 [2024-11-17 14:05:36.221671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.221680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:58.112 [2024-11-17 14:05:36.221689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:16:58.112 [2024-11-17 14:05:36.221696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.237570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.237619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:58.112 [2024-11-17 14:05:36.237631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.814 ms 00:16:58.112 [2024-11-17 14:05:36.237639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.240408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.240452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:58.112 [2024-11-17 14:05:36.240461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:16:58.112 [2024-11-17 14:05:36.240469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.242900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.242942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:58.112 [2024-11-17 14:05:36.242961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:16:58.112 [2024-11-17 14:05:36.242969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.243342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.243362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:58.112 [2024-11-17 14:05:36.243373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:16:58.112 [2024-11-17 14:05:36.243405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.112 [2024-11-17 14:05:36.266921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.112 [2024-11-17 14:05:36.267150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:58.112 [2024-11-17 14:05:36.267175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.490 ms
00:16:58.112 [2024-11-17 14:05:36.267184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.112 [2024-11-17 14:05:36.275486] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:16:58.112 [2024-11-17 14:05:36.294272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.112 [2024-11-17 14:05:36.294326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:16:58.112 [2024-11-17 14:05:36.294341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.886 ms
00:16:58.112 [2024-11-17 14:05:36.294349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.112 [2024-11-17 14:05:36.294438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.112 [2024-11-17 14:05:36.294449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:16:58.112 [2024-11-17 14:05:36.294459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:16:58.112 [2024-11-17 14:05:36.294468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.112 [2024-11-17 14:05:36.294534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.112 [2024-11-17 14:05:36.294545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:16:58.112 [2024-11-17 14:05:36.294554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:16:58.112 [2024-11-17 14:05:36.294562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.112 [2024-11-17 14:05:36.294585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.112 [2024-11-17 14:05:36.294595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:16:58.112 [2024-11-17 14:05:36.294603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:16:58.112 [2024-11-17 14:05:36.294615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.113 [2024-11-17 14:05:36.294652] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:16:58.113 [2024-11-17 14:05:36.294665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.113 [2024-11-17 14:05:36.294674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:16:58.113 [2024-11-17 14:05:36.294682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:16:58.113 [2024-11-17 14:05:36.294690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.113 [2024-11-17 14:05:36.300323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.113 [2024-11-17 14:05:36.300368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:16:58.113 [2024-11-17 14:05:36.300379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms
00:16:58.113 [2024-11-17 14:05:36.300387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.113 [2024-11-17 14:05:36.300484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.113 [2024-11-17 14:05:36.300498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:16:58.113 [2024-11-17 14:05:36.300507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:16:58.113 [2024-11-17 14:05:36.300516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
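The numbers in this startup trace are mutually consistent: the layout dump earlier reported "L2P entries: 23592960" and "L2P address size: 4", and the SB metadata dump showed a region (type 0x2, which by its size appears to be the L2P table; the log does not name region types) with blk_sz:0x5a00. Under the assumption of a 4 KiB FTL block, all three agree, as this arithmetic sketch checks:

    L2P_ENTRIES = 23592960  # "L2P entries" from the layout dump above
    ADDR_SIZE = 4           # "L2P address size" in bytes
    FTL_BLOCK = 4096        # assumed FTL block size in bytes

    l2p_bytes = L2P_ENTRIES * ADDR_SIZE  # 94,371,840 bytes
    print(l2p_bytes / (1024 * 1024))     # 90.0 -> "Region l2p ... blocks: 90.00 MiB"
    print(hex(l2p_bytes // FTL_BLOCK))   # 0x5a00 -> blk_sz of region type 0x2
    # The ftl_l2p_cache line above caps the resident portion at 59 of 60 MiB,
    # so only part of the 90 MiB table is held in memory at once.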
00:16:58.113 [2024-11-17 14:05:36.301556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:58.113 [2024-11-17 14:05:36.302919] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 145.899 ms, result 0
00:16:58.113 [2024-11-17 14:05:36.304371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:58.113 [2024-11-17 14:05:36.311608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:16:58.687  [2024-11-17T14:05:36.988Z] Copying: 4096/4096 [kB] (average 10214 kBps)
[2024-11-17 14:05:36.713567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:58.687 [2024-11-17 14:05:36.714607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.714647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:58.687 [2024-11-17 14:05:36.714668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:16:58.687 [2024-11-17 14:05:36.714679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.687 [2024-11-17 14:05:36.714701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:16:58.687 [2024-11-17 14:05:36.715382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.715421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:58.687 [2024-11-17 14:05:36.715433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms
00:16:58.687 [2024-11-17 14:05:36.715456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.687 [2024-11-17 14:05:36.718386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.718428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:58.687 [2024-11-17 14:05:36.718440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms
00:16:58.687 [2024-11-17 14:05:36.718449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.687 [2024-11-17 14:05:36.722912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.722947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:58.687 [2024-11-17 14:05:36.722958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms
00:16:58.687 [2024-11-17 14:05:36.722966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.687 [2024-11-17 14:05:36.729937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.729977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:16:58.687 [2024-11-17 14:05:36.729987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms
00:16:58.687 [2024-11-17 14:05:36.729994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.687 [2024-11-17 14:05:36.732645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.687 [2024-11-17 14:05:36.732815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:58.687 [2024-11-17 14:05:36.732832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms
00:16:58.688 [2024-11-17 14:05:36.732851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
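Every management step in these traces is a four-line group (Action or Rollback, name, duration, status), and finish_msg totals the whole process: 'FTL startup' above took 145.899 ms, the earlier 'FTL shutdown' 66.835 ms. A small sketch that folds such lines back into per-step records, for example to see where the time went (the regexes follow the line format visible here; "build.log" is a hypothetical path for this transcript):

    import re

    NAME = re.compile(r"name: (.+?)$")
    DUR = re.compile(r"duration: ([\d.]+) ms")

    def step_durations(lines):
        # Pair each trace_step "name: X" with the "duration: N ms" that follows.
        name = None
        for line in lines:
            if "trace_step" not in line:
                continue
            m = NAME.search(line)
            if m:
                name = m.group(1)
                continue
            d = DUR.search(line)
            if d and name is not None:
                yield name, float(d.group(1))
                name = None

    with open("build.log") as f:
        for step, ms in step_durations(f):
            if ms > 5.0:  # only the slow steps
                print(f"{ms:8.3f} ms  {step}")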
00:16:58.688 [2024-11-17 14:05:36.738359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.738406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:16:58.688 [2024-11-17 14:05:36.738423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.435 ms
00:16:58.688 [2024-11-17 14:05:36.738430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.738560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.738570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:16:58.688 [2024-11-17 14:05:36.738578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms
00:16:58.688 [2024-11-17 14:05:36.738586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.742055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.742100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:16:58.688 [2024-11-17 14:05:36.742110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.450 ms
00:16:58.688 [2024-11-17 14:05:36.742118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.744963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.745008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:16:58.688 [2024-11-17 14:05:36.745018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.803 ms
00:16:58.688 [2024-11-17 14:05:36.745024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.747304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.747482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:16:58.688 [2024-11-17 14:05:36.747500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms
00:16:58.688 [2024-11-17 14:05:36.747509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.749799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:58.688 [2024-11-17 14:05:36.749838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:58.688 [2024-11-17 14:05:36.749848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms
00:16:58.688 [2024-11-17 14:05:36.749854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:58.688 [2024-11-17 14:05:36.749895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:58.688 [2024-11-17 14:05:36.749915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[Bands 2-99: all 0 / 261120 wr_cnt: 0 state: free]
00:16:58.689 [2024-11-17 14:05:36.750698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:16:58.689 [2024-11-17 14:05:36.750713] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:58.689 [2024-11-17 14:05:36.750721] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42
00:16:58.689 [2024-11-17 14:05:36.750739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:58.689 [2024-11-17 14:05:36.750746] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:58.689 [2024-11-17 14:05:36.750754] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:58.689 [2024-11-17 14:05:36.750762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:58.689 [2024-11-17 14:05:36.750770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:58.690 [2024-11-17 14:05:36.750778] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:58.690 [2024-11-17 14:05:36.750785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:58.690 [2024-11-17 14:05:36.750792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:58.690 [2024-11-17 14:05:36.750798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:58.690 [2024-11-17 14:05:36.750806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.690 [2024-11-17 14:05:36.750819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:58.690 [2024-11-17 14:05:36.750831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:16:58.690 [2024-11-17 14:05:36.750838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.752764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.690 [2024-11-17 14:05:36.752794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:58.690 [2024-11-17 14:05:36.752804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:16:58.690 [2024-11-17 14:05:36.752812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.752924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.690 [2024-11-17 14:05:36.752933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:58.690 [2024-11-17 14:05:36.752942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:16:58.690 [2024-11-17 14:05:36.752949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.760066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.760112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.690 [2024-11-17 14:05:36.760123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.760130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.760190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.760206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.690 [2024-11-17 14:05:36.760214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.760222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.760303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.760314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.690 [2024-11-17 14:05:36.760328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.760336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.760354] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.760365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.690 [2024-11-17 14:05:36.760376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.760384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.773797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.773853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.690 [2024-11-17 14:05:36.773865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.773873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.784956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.690 [2024-11-17 14:05:36.785036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.690 [2024-11-17 14:05:36.785114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.690 [2024-11-17 14:05:36.785182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.690 [2024-11-17 14:05:36.785314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.690 [2024-11-17 14:05:36.785374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.690 [2024-11-17 14:05:36.785448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:58.690 [2024-11-17 14:05:36.785506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.690 [2024-11-17 14:05:36.785517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.690 [2024-11-17 14:05:36.785526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.690 [2024-11-17 14:05:36.785538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.690 [2024-11-17 14:05:36.785698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.051 ms, result 0 00:16:58.952 00:16:58.952 00:16:58.952 14:05:37 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=85770 00:16:58.952 14:05:37 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 85770 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85770 ']' 00:16:58.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.952 14:05:37 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.952 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:58.952 [2024-11-17 14:05:37.121899] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:58.952 [2024-11-17 14:05:37.122059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85770 ] 00:16:59.213 [2024-11-17 14:05:37.272917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.213 [2024-11-17 14:05:37.322930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.785 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.786 14:05:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:59.786 14:05:37 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:00.047 [2024-11-17 14:05:38.184844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.047 [2024-11-17 14:05:38.184924] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:00.310 [2024-11-17 14:05:38.361225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.362366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:00.310 [2024-11-17 14:05:38.362397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:00.310 [2024-11-17 14:05:38.362415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.364958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.365013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:00.310 [2024-11-17 14:05:38.365024] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms 00:17:00.310 [2024-11-17 14:05:38.365034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.365162] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:00.310 [2024-11-17 14:05:38.365450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:00.310 [2024-11-17 14:05:38.365467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.365478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:00.310 [2024-11-17 14:05:38.365488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:17:00.310 [2024-11-17 14:05:38.365497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.367357] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:00.310 [2024-11-17 14:05:38.370762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.370812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:00.310 [2024-11-17 14:05:38.370825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.403 ms 00:17:00.310 [2024-11-17 14:05:38.370833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.370909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.370919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:00.310 [2024-11-17 14:05:38.370933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:00.310 [2024-11-17 14:05:38.370940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.378841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.378884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:00.310 [2024-11-17 14:05:38.378896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.848 ms 00:17:00.310 [2024-11-17 14:05:38.378904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.379020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.379031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:00.310 [2024-11-17 14:05:38.379041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:00.310 [2024-11-17 14:05:38.379049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.379079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.379092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:00.310 [2024-11-17 14:05:38.379103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:00.310 [2024-11-17 14:05:38.379113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.379139] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:00.310 [2024-11-17 14:05:38.381258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.381301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:00.310 [2024-11-17 14:05:38.381311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:17:00.310 [2024-11-17 14:05:38.381321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.381367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.381377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:00.310 [2024-11-17 14:05:38.381386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:00.310 [2024-11-17 14:05:38.381395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.381416] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:00.310 [2024-11-17 14:05:38.381438] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:00.310 [2024-11-17 14:05:38.381480] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:00.310 [2024-11-17 14:05:38.381501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:00.310 [2024-11-17 14:05:38.381606] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:00.310 [2024-11-17 14:05:38.381620] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:00.310 [2024-11-17 14:05:38.381634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:00.310 [2024-11-17 14:05:38.381649] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:00.310 [2024-11-17 14:05:38.381658] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:00.310 [2024-11-17 14:05:38.381671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:00.310 [2024-11-17 14:05:38.381682] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:00.310 [2024-11-17 14:05:38.381693] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:00.310 [2024-11-17 14:05:38.381700] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:00.310 [2024-11-17 14:05:38.381710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.381723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:00.310 [2024-11-17 14:05:38.381733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:00.310 [2024-11-17 14:05:38.381741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 [2024-11-17 14:05:38.381831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.310 [2024-11-17 14:05:38.381839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:00.310 [2024-11-17 14:05:38.381850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:00.310 [2024-11-17 14:05:38.381857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.310 
[2024-11-17 14:05:38.381960] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:00.310 [2024-11-17 14:05:38.381971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:00.310 [2024-11-17 14:05:38.381984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.310 [2024-11-17 14:05:38.381993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:00.310 [2024-11-17 14:05:38.382019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:00.310 [2024-11-17 14:05:38.382039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:00.310 [2024-11-17 14:05:38.382055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.310 [2024-11-17 14:05:38.382073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:00.310 [2024-11-17 14:05:38.382080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:00.310 [2024-11-17 14:05:38.382089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:00.310 [2024-11-17 14:05:38.382098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:00.310 [2024-11-17 14:05:38.382108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:00.310 [2024-11-17 14:05:38.382115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:00.310 [2024-11-17 14:05:38.382132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:00.310 [2024-11-17 14:05:38.382142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:00.310 [2024-11-17 14:05:38.382162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.310 [2024-11-17 14:05:38.382179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:00.310 [2024-11-17 14:05:38.382189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:00.310 [2024-11-17 14:05:38.382199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.310 [2024-11-17 14:05:38.382207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:00.310 [2024-11-17 14:05:38.382217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.311 [2024-11-17 14:05:38.382257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:00.311 [2024-11-17 14:05:38.382266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:00.311 [2024-11-17 14:05:38.382281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:17:00.311 [2024-11-17 14:05:38.382291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.311 [2024-11-17 14:05:38.382307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:00.311 [2024-11-17 14:05:38.382313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:00.311 [2024-11-17 14:05:38.382324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:00.311 [2024-11-17 14:05:38.382331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:00.311 [2024-11-17 14:05:38.382340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:00.311 [2024-11-17 14:05:38.382346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:00.311 [2024-11-17 14:05:38.382362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:00.311 [2024-11-17 14:05:38.382372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:00.311 [2024-11-17 14:05:38.382389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:00.311 [2024-11-17 14:05:38.382396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:00.311 [2024-11-17 14:05:38.382406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:00.311 [2024-11-17 14:05:38.382414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:00.311 [2024-11-17 14:05:38.382422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:00.311 [2024-11-17 14:05:38.382429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:00.311 [2024-11-17 14:05:38.382438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:00.311 [2024-11-17 14:05:38.382445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:00.311 [2024-11-17 14:05:38.382456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:00.311 [2024-11-17 14:05:38.382464] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:00.311 [2024-11-17 14:05:38.382481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:00.311 [2024-11-17 14:05:38.382502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:00.311 [2024-11-17 14:05:38.382517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:00.311 [2024-11-17 14:05:38.382527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:00.311 [2024-11-17 14:05:38.382534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:17:00.311 [2024-11-17 14:05:38.382543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:00.311 [2024-11-17 14:05:38.382550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:00.311 [2024-11-17 14:05:38.382560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:00.311 [2024-11-17 14:05:38.382567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:00.311 [2024-11-17 14:05:38.382577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:00.311 [2024-11-17 14:05:38.382618] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:00.311 [2024-11-17 14:05:38.382629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:00.311 [2024-11-17 14:05:38.382648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:00.311 [2024-11-17 14:05:38.382656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:00.311 [2024-11-17 14:05:38.382664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:00.311 [2024-11-17 14:05:38.382673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.382682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:00.311 [2024-11-17 14:05:38.382689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:17:00.311 [2024-11-17 14:05:38.382700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.396368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.396570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:00.311 [2024-11-17 14:05:38.396590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.609 ms 00:17:00.311 [2024-11-17 14:05:38.396605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 
[2024-11-17 14:05:38.396743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.396759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:00.311 [2024-11-17 14:05:38.396770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:00.311 [2024-11-17 14:05:38.396781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.408689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.408735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:00.311 [2024-11-17 14:05:38.408746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.886 ms 00:17:00.311 [2024-11-17 14:05:38.408756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.408822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.408837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:00.311 [2024-11-17 14:05:38.408846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:00.311 [2024-11-17 14:05:38.408856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.409380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.409413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:00.311 [2024-11-17 14:05:38.409424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.502 ms 00:17:00.311 [2024-11-17 14:05:38.409435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.409592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.409617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:00.311 [2024-11-17 14:05:38.409633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:17:00.311 [2024-11-17 14:05:38.409644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.431706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.431772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:00.311 [2024-11-17 14:05:38.431789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.035 ms 00:17:00.311 [2024-11-17 14:05:38.431804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.435988] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:00.311 [2024-11-17 14:05:38.436043] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:00.311 [2024-11-17 14:05:38.436060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.436074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:00.311 [2024-11-17 14:05:38.436087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.075 ms 00:17:00.311 [2024-11-17 14:05:38.436100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.451921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.451967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:00.311 [2024-11-17 14:05:38.451984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.728 ms 00:17:00.311 [2024-11-17 14:05:38.451997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.454839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.454884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:00.311 [2024-11-17 14:05:38.454894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:17:00.311 [2024-11-17 14:05:38.454904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.457436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.457481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:00.311 [2024-11-17 14:05:38.457491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.481 ms 00:17:00.311 [2024-11-17 14:05:38.457501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.311 [2024-11-17 14:05:38.457842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.311 [2024-11-17 14:05:38.457858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:00.312 [2024-11-17 14:05:38.457872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:00.312 [2024-11-17 14:05:38.457882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.481740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.481799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:00.312 [2024-11-17 14:05:38.481816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.836 ms 00:17:00.312 [2024-11-17 14:05:38.481829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.489884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:00.312 [2024-11-17 14:05:38.508450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.508491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:00.312 [2024-11-17 14:05:38.508503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.528 ms 00:17:00.312 [2024-11-17 14:05:38.508512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.508599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.508609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:00.312 [2024-11-17 14:05:38.508621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:00.312 [2024-11-17 14:05:38.508631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.508689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.508698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:00.312 [2024-11-17 14:05:38.508712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.034 ms 00:17:00.312 [2024-11-17 14:05:38.508720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.508748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.508756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:00.312 [2024-11-17 14:05:38.508769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:00.312 [2024-11-17 14:05:38.508776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.508819] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:00.312 [2024-11-17 14:05:38.508828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.508838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:00.312 [2024-11-17 14:05:38.508846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:00.312 [2024-11-17 14:05:38.508856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.514432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.514482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:00.312 [2024-11-17 14:05:38.514493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.554 ms 00:17:00.312 [2024-11-17 14:05:38.514504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.514598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.312 [2024-11-17 14:05:38.514610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:00.312 [2024-11-17 14:05:38.514625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:00.312 [2024-11-17 14:05:38.514636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.312 [2024-11-17 14:05:38.515680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:00.312 [2024-11-17 14:05:38.516993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.114 ms, result 0 00:17:00.312 [2024-11-17 14:05:38.519054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:00.312 Some configs were skipped because the RPC state that can call them passed over. 
00:17:00.312 14:05:38 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:00.573 [2024-11-17 14:05:38.756834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.573 [2024-11-17 14:05:38.756892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:00.573 [2024-11-17 14:05:38.756914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:17:00.573 [2024-11-17 14:05:38.756923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.573 [2024-11-17 14:05:38.756966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.338 ms, result 0 00:17:00.573 true 00:17:00.573 14:05:38 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:00.833 [2024-11-17 14:05:38.972465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:00.833 [2024-11-17 14:05:38.972530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:00.833 [2024-11-17 14:05:38.972544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:17:00.833 [2024-11-17 14:05:38.972554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:00.833 [2024-11-17 14:05:38.972591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.682 ms, result 0 00:17:00.833 true 00:17:00.833 14:05:38 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 85770 00:17:00.833 14:05:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85770 ']' 00:17:00.833 14:05:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85770 00:17:00.833 14:05:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85770 00:17:00.833 killing process with pid 85770 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85770' 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85770 00:17:00.833 14:05:39 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85770 00:17:01.102 [2024-11-17 14:05:39.161654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.161721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:01.102 [2024-11-17 14:05:39.161737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.102 [2024-11-17 14:05:39.161746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.161773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:01.102 [2024-11-17 14:05:39.162532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.162576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:01.102 [2024-11-17 14:05:39.162587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.744 ms 00:17:01.102 [2024-11-17 14:05:39.162597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.162894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.162908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:01.102 [2024-11-17 14:05:39.162918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:01.102 [2024-11-17 14:05:39.162928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.167539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.167583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:01.102 [2024-11-17 14:05:39.167594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.590 ms 00:17:01.102 [2024-11-17 14:05:39.167607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.175036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.175086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:01.102 [2024-11-17 14:05:39.175096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.388 ms 00:17:01.102 [2024-11-17 14:05:39.175108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.177846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.177897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:01.102 [2024-11-17 14:05:39.177907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.680 ms 00:17:01.102 [2024-11-17 14:05:39.177917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.183115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.183179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:01.102 [2024-11-17 14:05:39.183189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:17:01.102 [2024-11-17 14:05:39.183199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.183348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.183362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:01.102 [2024-11-17 14:05:39.183371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:01.102 [2024-11-17 14:05:39.183381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.186342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.186394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:01.102 [2024-11-17 14:05:39.186403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.941 ms 00:17:01.102 [2024-11-17 14:05:39.186418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.189099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.189152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:01.102 [2024-11-17 
14:05:39.189161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:17:01.102 [2024-11-17 14:05:39.189171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.191336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.191383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:01.102 [2024-11-17 14:05:39.191395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.121 ms 00:17:01.102 [2024-11-17 14:05:39.191405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.193605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.102 [2024-11-17 14:05:39.193655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:01.102 [2024-11-17 14:05:39.193664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.099 ms 00:17:01.102 [2024-11-17 14:05:39.193673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.102 [2024-11-17 14:05:39.193714] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:01.102 [2024-11-17 14:05:39.193737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193883] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.193993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 
14:05:39.194099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:01.102 [2024-11-17 14:05:39.194157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:01.103 [2024-11-17 14:05:39.194353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:01.103 [2024-11-17 14:05:39.194671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:01.103 [2024-11-17 14:05:39.194680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:17:01.103 [2024-11-17 14:05:39.194690] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:01.103 [2024-11-17 14:05:39.194699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:01.103 [2024-11-17 14:05:39.194708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:01.103 [2024-11-17 14:05:39.194719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:01.103 [2024-11-17 14:05:39.194729] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:01.103 [2024-11-17 14:05:39.194738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:01.103 [2024-11-17 14:05:39.194750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:01.103 [2024-11-17 14:05:39.194757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:01.103 [2024-11-17 14:05:39.194765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:01.103 [2024-11-17 14:05:39.194772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.103 [2024-11-17 14:05:39.194783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:01.103 [2024-11-17 14:05:39.194792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:17:01.103 [2024-11-17 14:05:39.194807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.197008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.103 [2024-11-17 14:05:39.197056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:01.103 [2024-11-17 14:05:39.197066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:17:01.103 [2024-11-17 14:05:39.197076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.197220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:01.103 [2024-11-17 14:05:39.197257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:01.103 [2024-11-17 14:05:39.197267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:01.103 [2024-11-17 14:05:39.197278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.205052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.205107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.103 [2024-11-17 14:05:39.205118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.205128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.205203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.205215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.103 [2024-11-17 14:05:39.205223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.205252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.205300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.205315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.103 [2024-11-17 14:05:39.205323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.205334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.205353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.205363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.103 [2024-11-17 14:05:39.205371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.205382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.219268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.219329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.103 [2024-11-17 14:05:39.219340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.219349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.103 [2024-11-17 14:05:39.230363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.103 [2024-11-17 14:05:39.230468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:01.103 [2024-11-17 14:05:39.230525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.103 [2024-11-17 14:05:39.230545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.103 [2024-11-17 14:05:39.230650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:01.103 [2024-11-17 14:05:39.230719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.103 [2024-11-17 14:05:39.230796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.230860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:01.103 [2024-11-17 14:05:39.230873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.103 [2024-11-17 14:05:39.230881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:01.103 [2024-11-17 14:05:39.230892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.103 [2024-11-17 14:05:39.231048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.368 ms, result 0 00:17:01.369 14:05:39 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:01.369 [2024-11-17 14:05:39.593273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
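The trim test then hands off to spdk_dd, which copies 65536 blocks of 4 KiB (256 MiB in total, matching the "Copying: 256/256 [MB]" progress further down) out of the ftl0 bdev into a flat file; the --json argument points spdk_dd at an SPDK application config that recreates the FTL bdev before I/O starts. The actual test/ftl/config/ftl.json is not reproduced in this log, so the following is only a minimal hypothetical sketch, assuming the usual bdev-subsystem config shape and the bdev_ftl_create RPC; the cache bdev and device UUID are taken from the trace output in this log, while the base bdev name is a placeholder:

    import json

    # Hypothetical minimal spdk_dd --json config. The real ftl.json used by the
    # test is not shown in the log; field names assume SPDK's application config
    # format and the bdev_ftl_create RPC.
    config = {
        "subsystems": [{
            "subsystem": "bdev",
            "config": [{
                "method": "bdev_ftl_create",
                "params": {
                    "name": "ftl0",
                    "base_bdev": "nvme0n1",  # placeholder: the base bdev is not named here
                    "cache": "nvc0n1p0",     # write buffer cache reported by the trace
                    "uuid": "ddfd3108-0b08-42c1-8a1f-32ad0e417f42",  # UUID from the stats dump
                },
            }],
        }],
    }
    print(json.dumps(config, indent=2))

Loading a config like this is what lets the standalone spdk_dd run below come up without a long-lived SPDK target process.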
00:17:01.369 [2024-11-17 14:05:39.593415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85806 ] 00:17:01.630 [2024-11-17 14:05:39.742435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.630 [2024-11-17 14:05:39.791997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.630 [2024-11-17 14:05:39.905540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.630 [2024-11-17 14:05:39.905631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:01.892 [2024-11-17 14:05:40.065778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.065847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.892 [2024-11-17 14:05:40.065866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.892 [2024-11-17 14:05:40.065876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.068431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.068479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.892 [2024-11-17 14:05:40.068494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.535 ms 00:17:01.892 [2024-11-17 14:05:40.068505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.068607] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.892 [2024-11-17 14:05:40.068860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:01.892 [2024-11-17 14:05:40.068881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.068893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.892 [2024-11-17 14:05:40.068909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:17:01.892 [2024-11-17 14:05:40.068920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.070983] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:01.892 [2024-11-17 14:05:40.074863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.074921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:01.892 [2024-11-17 14:05:40.074934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.883 ms 00:17:01.892 [2024-11-17 14:05:40.074946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.075025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.075037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:01.892 [2024-11-17 14:05:40.075046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:01.892 [2024-11-17 14:05:40.075054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.082975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
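Every management step in this startup (and in the shutdown above) is bracketed by the same trace_step records: the step kind (Action or Rollback), its name, its duration, and a status. The Rollback records with 0.000 ms durations in the shutdown trace are the startup steps' registered undo callbacks being replayed in reverse order (Open base bdev was the first step in, and the last rollback out). SPDK implements this in C inside mngt/ftl_mngt.c; the snippet below is only a minimal Python sketch of the pattern those records imply, with invented names throughout:

    import time

    def run_steps(steps):
        """Run (name, action, rollback) steps in order; on failure, replay the
        rollbacks of completed steps in reverse. A clean deinit pass can reuse
        the same reverse walk, which is what the shutdown trace suggests."""
        done = []
        for name, action, rollback in steps:
            start = time.perf_counter()
            try:
                action()
                status = 0
            except Exception:
                status = -1
            dur_ms = (time.perf_counter() - start) * 1000
            print(f"Action name: {name} duration: {dur_ms:.3f} ms status: {status}")
            if status != 0:
                for undo_name, undo in reversed(done):
                    undo()
                    print(f"Rollback name: {undo_name}")
                return status
            done.append((name, rollback))
        return 0

    # Toy usage mirroring the first two startup steps:
    run_steps([
        ("Check configuration", lambda: None, lambda: None),
        ("Open base bdev", lambda: None, lambda: None),
    ])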
00:17:01.892 [2024-11-17 14:05:40.083021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.892 [2024-11-17 14:05:40.083032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.878 ms 00:17:01.892 [2024-11-17 14:05:40.083040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.083178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.083190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.892 [2024-11-17 14:05:40.083199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:01.892 [2024-11-17 14:05:40.083207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.083258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.083272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.892 [2024-11-17 14:05:40.083281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:01.892 [2024-11-17 14:05:40.083289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.083313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.892 [2024-11-17 14:05:40.085356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.085393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.892 [2024-11-17 14:05:40.085410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:17:01.892 [2024-11-17 14:05:40.085418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.085462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.085474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.892 [2024-11-17 14:05:40.085493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:01.892 [2024-11-17 14:05:40.085500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.085520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:01.892 [2024-11-17 14:05:40.085540] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:01.892 [2024-11-17 14:05:40.085581] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:01.892 [2024-11-17 14:05:40.085598] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:01.892 [2024-11-17 14:05:40.085707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.892 [2024-11-17 14:05:40.085719] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.892 [2024-11-17 14:05:40.085731] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.892 [2024-11-17 14:05:40.085742] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.892 [2024-11-17 14:05:40.085752] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.892 [2024-11-17 14:05:40.085760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.892 [2024-11-17 14:05:40.085768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.892 [2024-11-17 14:05:40.085776] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.892 [2024-11-17 14:05:40.085788] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.892 [2024-11-17 14:05:40.085796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.085810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.892 [2024-11-17 14:05:40.085821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:17:01.892 [2024-11-17 14:05:40.085828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.085917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.892 [2024-11-17 14:05:40.085934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.892 [2024-11-17 14:05:40.085943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:01.892 [2024-11-17 14:05:40.085957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.892 [2024-11-17 14:05:40.086063] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.892 [2024-11-17 14:05:40.086085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:01.892 [2024-11-17 14:05:40.086098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.892 [2024-11-17 14:05:40.086110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.893 [2024-11-17 14:05:40.086131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.893 [2024-11-17 14:05:40.086157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.893 [2024-11-17 14:05:40.086174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.893 [2024-11-17 14:05:40.086182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.893 [2024-11-17 14:05:40.086190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.893 [2024-11-17 14:05:40.086197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.893 [2024-11-17 14:05:40.086205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.893 [2024-11-17 14:05:40.086214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.893 [2024-11-17 14:05:40.086231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086254] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:01.893 [2024-11-17 14:05:40.086270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.893 [2024-11-17 14:05:40.086294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.893 [2024-11-17 14:05:40.086323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.893 [2024-11-17 14:05:40.086347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.893 [2024-11-17 14:05:40.086372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.893 [2024-11-17 14:05:40.086388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.893 [2024-11-17 14:05:40.086395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.893 [2024-11-17 14:05:40.086401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.893 [2024-11-17 14:05:40.086410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.893 [2024-11-17 14:05:40.086417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.893 [2024-11-17 14:05:40.086424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.893 [2024-11-17 14:05:40.086440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.893 [2024-11-17 14:05:40.086447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086454] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.893 [2024-11-17 14:05:40.086469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.893 [2024-11-17 14:05:40.086476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.893 [2024-11-17 14:05:40.086496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.893 [2024-11-17 14:05:40.086504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.893 [2024-11-17 14:05:40.086510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.893 
[2024-11-17 14:05:40.086517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.893 [2024-11-17 14:05:40.086524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.893 [2024-11-17 14:05:40.086531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.893 [2024-11-17 14:05:40.086539] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.893 [2024-11-17 14:05:40.086549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.893 [2024-11-17 14:05:40.086568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.893 [2024-11-17 14:05:40.086576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.893 [2024-11-17 14:05:40.086583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.893 [2024-11-17 14:05:40.086592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.893 [2024-11-17 14:05:40.086600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.893 [2024-11-17 14:05:40.086607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.893 [2024-11-17 14:05:40.086615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.893 [2024-11-17 14:05:40.086623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.893 [2024-11-17 14:05:40.086631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.893 [2024-11-17 14:05:40.086671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.893 [2024-11-17 14:05:40.086682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.893 [2024-11-17 14:05:40.086704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.893 [2024-11-17 14:05:40.086712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.893 [2024-11-17 14:05:40.086720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.893 [2024-11-17 14:05:40.086728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.086736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.893 [2024-11-17 14:05:40.086747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:01.893 [2024-11-17 14:05:40.086756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.113305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.113387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:01.893 [2024-11-17 14:05:40.113413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.492 ms 00:17:01.893 [2024-11-17 14:05:40.113430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.113710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.113745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:01.893 [2024-11-17 14:05:40.113764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:17:01.893 [2024-11-17 14:05:40.113798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.126652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.126699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:01.893 [2024-11-17 14:05:40.126710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.810 ms 00:17:01.893 [2024-11-17 14:05:40.126718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.126795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.126805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:01.893 [2024-11-17 14:05:40.126817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:01.893 [2024-11-17 14:05:40.126825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.127371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.127406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:01.893 [2024-11-17 14:05:40.127419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:01.893 [2024-11-17 14:05:40.127428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.893 [2024-11-17 14:05:40.127604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.893 [2024-11-17 14:05:40.127619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:01.893 [2024-11-17 14:05:40.127628] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:01.894 [2024-11-17 14:05:40.127640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.134885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.134936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:01.894 [2024-11-17 14:05:40.134946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.221 ms 00:17:01.894 [2024-11-17 14:05:40.134953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.138749] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:01.894 [2024-11-17 14:05:40.138805] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:01.894 [2024-11-17 14:05:40.138816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.138824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:01.894 [2024-11-17 14:05:40.138833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.766 ms 00:17:01.894 [2024-11-17 14:05:40.138840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.156997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.157053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:01.894 [2024-11-17 14:05:40.157065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.093 ms 00:17:01.894 [2024-11-17 14:05:40.157073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.159742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.159786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:01.894 [2024-11-17 14:05:40.159796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:17:01.894 [2024-11-17 14:05:40.159804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.162134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.162176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:01.894 [2024-11-17 14:05:40.162195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.277 ms 00:17:01.894 [2024-11-17 14:05:40.162202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.162568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.162595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:01.894 [2024-11-17 14:05:40.162611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:01.894 [2024-11-17 14:05:40.162620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.894 [2024-11-17 14:05:40.185500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.894 [2024-11-17 14:05:40.185564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:01.894 [2024-11-17 14:05:40.185578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.854 ms 00:17:01.894 [2024-11-17 14:05:40.185588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.193811] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:02.155 [2024-11-17 14:05:40.213121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.213177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:02.155 [2024-11-17 14:05:40.213190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.438 ms 00:17:02.155 [2024-11-17 14:05:40.213198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.213315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.213328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:02.155 [2024-11-17 14:05:40.213339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:02.155 [2024-11-17 14:05:40.213356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.213416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.213427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:02.155 [2024-11-17 14:05:40.213435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:02.155 [2024-11-17 14:05:40.213443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.213467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.213481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:02.155 [2024-11-17 14:05:40.213489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:02.155 [2024-11-17 14:05:40.213497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.213536] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:02.155 [2024-11-17 14:05:40.213548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.213557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:02.155 [2024-11-17 14:05:40.213566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:02.155 [2024-11-17 14:05:40.213574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.219651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.219702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:02.155 [2024-11-17 14:05:40.219715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.057 ms 00:17:02.155 [2024-11-17 14:05:40.219723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 [2024-11-17 14:05:40.219820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:02.155 [2024-11-17 14:05:40.219834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:02.155 [2024-11-17 14:05:40.219844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:02.155 [2024-11-17 14:05:40.219858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:02.155 
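With startup about to finish, the layout numbers reported above are worth checking once, since they are internally consistent: the superblock dump implies a 4 KiB FTL block (Region sb, blk_sz 0x20 is 32 blocks, shown as 0.12 MiB), the 23592960 L2P entries at 4 bytes each come to exactly the 90.00 MiB of the l2p region (blk_sz 0x5a00), and those entries address 92160 MiB of user space against the 102400 MiB data_btm region, i.e. 90%, consistent with a roughly 10% overprovisioning reserve. The arithmetic:

    BLOCK = 4096            # bytes; implied by Region sb: blk_sz 0x20 (32 blocks) == 0.12 MiB
    MIB = 1024 * 1024

    l2p_entries = 23592960  # "L2P entries" from the layout setup
    l2p_addr = 4            # "L2P address size" in bytes

    print(l2p_entries * l2p_addr / MIB)   # 90.0 MiB -> matches the l2p region size
    print(0x5a00 * BLOCK / MIB)           # 90.0 MiB -> same figure from blk_sz in the SB dump

    user_mib = l2p_entries * BLOCK / MIB  # user-addressable space behind the L2P
    print(user_mib, user_mib / 102400)    # 92160.0 0.9 -> ~10% of data_btm held back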
[2024-11-17 14:05:40.221037] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:02.155 [2024-11-17 14:05:40.222369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.941 ms, result 0 00:17:02.155 [2024-11-17 14:05:40.224163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:02.155 [2024-11-17 14:05:40.230980] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.099  [2024-11-17T14:05:42.343Z] Copying: 16/256 [MB] (16 MBps) [2024-11-17T14:05:43.730Z] Copying: 27/256 [MB] (10 MBps) [2024-11-17T14:05:44.304Z] Copying: 48/256 [MB] (21 MBps) [2024-11-17T14:05:45.692Z] Copying: 69/256 [MB] (21 MBps) [2024-11-17T14:05:46.637Z] Copying: 87/256 [MB] (17 MBps) [2024-11-17T14:05:47.581Z] Copying: 104/256 [MB] (17 MBps) [2024-11-17T14:05:48.526Z] Copying: 117/256 [MB] (12 MBps) [2024-11-17T14:05:49.469Z] Copying: 128/256 [MB] (11 MBps) [2024-11-17T14:05:50.412Z] Copying: 143/256 [MB] (15 MBps) [2024-11-17T14:05:51.356Z] Copying: 158/256 [MB] (15 MBps) [2024-11-17T14:05:52.301Z] Copying: 169/256 [MB] (10 MBps) [2024-11-17T14:05:53.685Z] Copying: 179/256 [MB] (10 MBps) [2024-11-17T14:05:54.629Z] Copying: 193/256 [MB] (14 MBps) [2024-11-17T14:05:55.572Z] Copying: 213/256 [MB] (20 MBps) [2024-11-17T14:05:56.516Z] Copying: 224/256 [MB] (10 MBps) [2024-11-17T14:05:57.460Z] Copying: 234/256 [MB] (10 MBps) [2024-11-17T14:05:58.403Z] Copying: 245/256 [MB] (10 MBps) [2024-11-17T14:05:58.403Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-17 14:05:58.368471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.102 [2024-11-17 14:05:58.371029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.371090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.102 [2024-11-17 14:05:58.371122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:20.102 [2024-11-17 14:05:58.371137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.371176] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.102 [2024-11-17 14:05:58.372010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.372068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.102 [2024-11-17 14:05:58.372098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:17:20.102 [2024-11-17 14:05:58.372123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.372849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.372901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.102 [2024-11-17 14:05:58.372925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:17:20.102 [2024-11-17 14:05:58.372945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.380657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.380708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
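The copy itself moves 65536 blocks of 4 KiB, i.e. 256 MiB, between roughly 14:05:40.23 (core IO channel up) and 14:05:58.37 (IO channel destroyed), which is where the reported "average 14 MBps" comes from; likewise the WAF: inf in the earlier shutdown stats dump falls straight out of its counters (total writes 960, user writes 0, and a write amplification factor with no user writes divides by zero). A quick check of both figures:

    blocks, block_size = 65536, 4096        # spdk_dd --count=65536, 4 KiB FTL blocks
    total_mib = blocks * block_size / (1024 * 1024)
    elapsed_s = 58.368 - 40.231             # 14:05:40.231 -> 14:05:58.368, same minute
    print(total_mib, total_mib / elapsed_s) # 256.0 ~14.1 -> "average 14 MBps"

    # WAF = total writes / user writes; infinite when the run issued no user writes
    total_writes, user_writes = 960, 0
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print(waf)                              # inf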
00:17:20.102 [2024-11-17 14:05:58.380719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:17:20.102 [2024-11-17 14:05:58.380727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.388547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.388590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.102 [2024-11-17 14:05:58.388601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.769 ms 00:17:20.102 [2024-11-17 14:05:58.388614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.391571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.391616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.102 [2024-11-17 14:05:58.391629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:17:20.102 [2024-11-17 14:05:58.391648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.396688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.396740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.102 [2024-11-17 14:05:58.396759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.991 ms 00:17:20.102 [2024-11-17 14:05:58.396767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.396952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.102 [2024-11-17 14:05:58.396967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.102 [2024-11-17 14:05:58.396980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:20.102 [2024-11-17 14:05:58.396992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.102 [2024-11-17 14:05:58.400232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.103 [2024-11-17 14:05:58.400301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.103 [2024-11-17 14:05:58.400317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:17:20.103 [2024-11-17 14:05:58.400328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.365 [2024-11-17 14:05:58.403267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.365 [2024-11-17 14:05:58.403305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.365 [2024-11-17 14:05:58.403316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:17:20.365 [2024-11-17 14:05:58.403323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.365 [2024-11-17 14:05:58.405587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.365 [2024-11-17 14:05:58.405629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.365 [2024-11-17 14:05:58.405639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:17:20.365 [2024-11-17 14:05:58.405646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.365 [2024-11-17 14:05:58.408033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.365 [2024-11-17 14:05:58.408076] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.365 [2024-11-17 14:05:58.408086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:17:20.365 [2024-11-17 14:05:58.408094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.365 [2024-11-17 14:05:58.408140] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.365 [2024-11-17 14:05:58.408163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.365 [... Bands 2 through 100 condensed: all 100 ftl_dev_dump_bands entries in this dump are identical, 0 / 261120 wr_cnt: 0 state: free ...] 00:17:20.366 [2024-11-17 14:05:58.409005] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.366 [2024-11-17 14:05:58.409014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ddfd3108-0b08-42c1-8a1f-32ad0e417f42 00:17:20.366 [2024-11-17 14:05:58.409029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.366 [2024-11-17 14:05:58.409036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.366 [2024-11-17 14:05:58.409050] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.366 [2024-11-17 14:05:58.409058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.366 [2024-11-17 14:05:58.409066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.366 [2024-11-17 14:05:58.409075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.366 [2024-11-17 14:05:58.409083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.366 [2024-11-17 14:05:58.409090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.366 [2024-11-17 14:05:58.409096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.367 [2024-11-17 14:05:58.409109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.367 [2024-11-17 14:05:58.409133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.367 [2024-11-17 14:05:58.409146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:17:20.367 [2024-11-17 14:05:58.409155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.411564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.367 [2024-11-17 14:05:58.411601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.367 [2024-11-17 14:05:58.411613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:17:20.367 [2024-11-17 14:05:58.411631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.411775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.367 [2024-11-17 14:05:58.411790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.367 [2024-11-17 14:05:58.411799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:20.367 [2024-11-17 14:05:58.411811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.419687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.419733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.367 [2024-11-17 14:05:58.419744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.419753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
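A side note on the "WAF: inf" line in the statistics dump above: the write amplification factor is the ratio of total media writes to user writes, and this shutdown dump shows 960 total writes against 0 user writes, so the ratio is undefined and printed as "inf". A minimal sketch of that calculation, using a hypothetical waf helper that is not part of the SPDK scripts:

```bash
# Hypothetical helper (not in the SPDK tree): reproduce the WAF figure from
# the two counters in the dump above (total writes: 960, user writes: 0).
waf() {
  local total=$1 user=$2
  if (( user == 0 )); then
    echo inf   # division by zero; the FTL dump prints "inf" for this case too
  else
    # bash has no float arithmetic, so delegate the division to awk
    awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
  fi
}
waf 960 0   # -> inf, matching the dump above
```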
00:17:20.367 [2024-11-17 14:05:58.419828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.419841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.367 [2024-11-17 14:05:58.419850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.419858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.419903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.419916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.367 [2024-11-17 14:05:58.419927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.419935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.419953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.419962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.367 [2024-11-17 14:05:58.419973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.419981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.434124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.434181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.367 [2024-11-17 14:05:58.434193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.434201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.445938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.445993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.367 [2024-11-17 14:05:58.446005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.367 [2024-11-17 14:05:58.446086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.367 [2024-11-17 14:05:58.446144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.367 [2024-11-17 14:05:58.446263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 
14:05:58.446271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.367 [2024-11-17 14:05:58.446320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.367 [2024-11-17 14:05:58.446396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.367 [2024-11-17 14:05:58.446461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.367 [2024-11-17 14:05:58.446474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.367 [2024-11-17 14:05:58.446485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.367 [2024-11-17 14:05:58.446641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.594 ms, result 0 00:17:20.628 00:17:20.628 00:17:20.628 14:05:58 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:21.199 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:21.199 14:05:59 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 85770 00:17:21.199 14:05:59 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85770 ']' 00:17:21.199 14:05:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85770 00:17:21.199 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85770) - No such process 00:17:21.199 14:05:59 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 85770 is not found' 00:17:21.199 Process with pid 85770 is not found 00:17:21.199 ************************************ 00:17:21.199 END TEST ftl_trim 00:17:21.199 ************************************ 00:17:21.199 00:17:21.199 real 1m6.721s 00:17:21.199 user 1m25.423s 00:17:21.199 sys 0m5.197s 00:17:21.199 14:05:59 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.199 14:05:59 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:21.199 14:05:59 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:21.199 14:05:59 ftl -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:21.199 14:05:59 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.199 14:05:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:21.199 ************************************ 00:17:21.199 START TEST ftl_restore 00:17:21.199 ************************************ 00:17:21.199 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:21.199 * Looking for test storage... 00:17:21.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.199 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:21.199 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:21.199 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.460 14:05:59 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:21.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.460 --rc genhtml_branch_coverage=1 00:17:21.460 --rc genhtml_function_coverage=1 00:17:21.460 --rc genhtml_legend=1 00:17:21.460 --rc geninfo_all_blocks=1 00:17:21.460 --rc geninfo_unexecuted_blocks=1 00:17:21.460 00:17:21.460 ' 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:21.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.460 --rc genhtml_branch_coverage=1 00:17:21.460 --rc genhtml_function_coverage=1 00:17:21.460 --rc genhtml_legend=1 00:17:21.460 --rc geninfo_all_blocks=1 00:17:21.460 --rc geninfo_unexecuted_blocks=1 00:17:21.460 00:17:21.460 ' 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:21.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.460 --rc genhtml_branch_coverage=1 00:17:21.460 --rc genhtml_function_coverage=1 00:17:21.460 --rc genhtml_legend=1 00:17:21.460 --rc geninfo_all_blocks=1 00:17:21.460 --rc geninfo_unexecuted_blocks=1 00:17:21.460 00:17:21.460 ' 00:17:21.460 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:21.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.460 --rc genhtml_branch_coverage=1 00:17:21.460 --rc genhtml_function_coverage=1 00:17:21.460 --rc genhtml_legend=1 00:17:21.460 --rc geninfo_all_blocks=1 00:17:21.460 --rc geninfo_unexecuted_blocks=1 00:17:21.460 00:17:21.460 ' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
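The lt / cmp_versions xtrace above (from scripts/common.sh) splits both version strings on '.', '-' and ':' into arrays and compares them field by field, which is how "lcov 1.15 < 2" is decided before the LCOV_OPTS exports. A condensed sketch of that idiom follows; it covers only the '<' case exercised here, the real cmp_versions handles the other operators as well, and the helper name is chosen for this sketch:

```bash
# Condensed sketch of the version comparison traced above; only the "<"
# path is shown. Array and loop names follow the xtrace (ver1, ver2, v).
version_lt() {
  local -a ver1 ver2
  local v
  IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
  IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less-than holds
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: less-than fails
  done
  return 1   # all fields equal: not strictly less
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 result traced above
```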
00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.V6oaqLhUEq 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:21.461 
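For reference, the restore.sh option handling traced above boils down to a standard getopts loop: the script takes -c <nv-cache BDF> (plus -u and -f, unused in this run) followed by the base device BDF. A runnable sketch under those assumptions, with the script's restore_kill cleanup stubbed out:

```bash
restore_kill() { :; }   # stub for this sketch; the real cleanup lives in restore.sh
parse_restore_args() {
  local opt
  while getopts :u:c:f opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # write-buffer (NV cache) controller, 0000:00:10.0 here
    esac                       # -u and -f cases elided
  done
  shift 2                      # drop the consumed "-c <bdf>" pair, as traced above
  device=$1                    # base device, 0000:00:11.0 here
  timeout=240
  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
}
parse_restore_args -c 0000:00:10.0 0000:00:11.0
echo "nv_cache=$nv_cache device=$device timeout=$timeout"
```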
14:05:59 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86086 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86086 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86086 ']' 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:21.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.461 14:05:59 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:21.461 14:05:59 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:21.461 [2024-11-17 14:05:59.664277] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:21.461 [2024-11-17 14:05:59.664439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86086 ] 00:17:21.722 [2024-11-17 14:05:59.816322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.722 [2024-11-17 14:05:59.866360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.295 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.295 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:17:22.295 14:06:00 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:22.295 14:06:00 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:17:22.295 14:06:00 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:22.295 14:06:00 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:17:22.295 14:06:00 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:17:22.296 14:06:00 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:22.557 14:06:00 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:22.557 14:06:00 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:17:22.557 14:06:00 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:22.557 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:22.557 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:22.557 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:22.557 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:22.557 14:06:00 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:22.819 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:22.819 { 00:17:22.819 "name": "nvme0n1", 00:17:22.819 "aliases": [ 00:17:22.819 "8d5f358a-f12e-45c6-807f-076b71ad8fa3" 00:17:22.819 ], 00:17:22.819 "product_name": "NVMe disk", 00:17:22.819 "block_size": 4096, 00:17:22.819 "num_blocks": 1310720, 00:17:22.819 "uuid": 
"8d5f358a-f12e-45c6-807f-076b71ad8fa3", 00:17:22.819 "numa_id": -1, 00:17:22.820 "assigned_rate_limits": { 00:17:22.820 "rw_ios_per_sec": 0, 00:17:22.820 "rw_mbytes_per_sec": 0, 00:17:22.820 "r_mbytes_per_sec": 0, 00:17:22.820 "w_mbytes_per_sec": 0 00:17:22.820 }, 00:17:22.820 "claimed": true, 00:17:22.820 "claim_type": "read_many_write_one", 00:17:22.820 "zoned": false, 00:17:22.820 "supported_io_types": { 00:17:22.820 "read": true, 00:17:22.820 "write": true, 00:17:22.820 "unmap": true, 00:17:22.820 "flush": true, 00:17:22.820 "reset": true, 00:17:22.820 "nvme_admin": true, 00:17:22.820 "nvme_io": true, 00:17:22.820 "nvme_io_md": false, 00:17:22.820 "write_zeroes": true, 00:17:22.820 "zcopy": false, 00:17:22.820 "get_zone_info": false, 00:17:22.820 "zone_management": false, 00:17:22.820 "zone_append": false, 00:17:22.820 "compare": true, 00:17:22.820 "compare_and_write": false, 00:17:22.820 "abort": true, 00:17:22.820 "seek_hole": false, 00:17:22.820 "seek_data": false, 00:17:22.820 "copy": true, 00:17:22.820 "nvme_iov_md": false 00:17:22.820 }, 00:17:22.820 "driver_specific": { 00:17:22.820 "nvme": [ 00:17:22.820 { 00:17:22.820 "pci_address": "0000:00:11.0", 00:17:22.820 "trid": { 00:17:22.820 "trtype": "PCIe", 00:17:22.820 "traddr": "0000:00:11.0" 00:17:22.820 }, 00:17:22.820 "ctrlr_data": { 00:17:22.820 "cntlid": 0, 00:17:22.820 "vendor_id": "0x1b36", 00:17:22.820 "model_number": "QEMU NVMe Ctrl", 00:17:22.820 "serial_number": "12341", 00:17:22.820 "firmware_revision": "8.0.0", 00:17:22.820 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:22.820 "oacs": { 00:17:22.820 "security": 0, 00:17:22.820 "format": 1, 00:17:22.820 "firmware": 0, 00:17:22.820 "ns_manage": 1 00:17:22.820 }, 00:17:22.820 "multi_ctrlr": false, 00:17:22.820 "ana_reporting": false 00:17:22.820 }, 00:17:22.820 "vs": { 00:17:22.820 "nvme_version": "1.4" 00:17:22.820 }, 00:17:22.820 "ns_data": { 00:17:22.820 "id": 1, 00:17:22.820 "can_share": false 00:17:22.820 } 00:17:22.820 } 00:17:22.820 ], 00:17:22.820 "mp_policy": "active_passive" 00:17:22.820 } 00:17:22.820 } 00:17:22.820 ]' 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:22.820 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:17:22.820 14:06:01 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:17:22.820 14:06:01 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:22.820 14:06:01 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:17:22.820 14:06:01 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:22.820 14:06:01 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:23.080 14:06:01 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=0ba7943d-96dc-4460-aa5e-acdae528943a 00:17:23.080 14:06:01 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:17:23.080 14:06:01 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ba7943d-96dc-4460-aa5e-acdae528943a 00:17:23.340 14:06:01 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:17:23.602 14:06:01 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=bb373656-262d-48a9-9413-b8e6f226da14 00:17:23.602 14:06:01 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bb373656-262d-48a9-9413-b8e6f226da14 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:17:23.863 14:06:01 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:23.863 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:23.863 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:23.863 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:23.863 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:23.863 14:06:01 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.124 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:24.124 { 00:17:24.124 "name": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.124 "aliases": [ 00:17:24.124 "lvs/nvme0n1p0" 00:17:24.124 ], 00:17:24.124 "product_name": "Logical Volume", 00:17:24.124 "block_size": 4096, 00:17:24.125 "num_blocks": 26476544, 00:17:24.125 "uuid": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.125 "assigned_rate_limits": { 00:17:24.125 "rw_ios_per_sec": 0, 00:17:24.125 "rw_mbytes_per_sec": 0, 00:17:24.125 "r_mbytes_per_sec": 0, 00:17:24.125 "w_mbytes_per_sec": 0 00:17:24.125 }, 00:17:24.125 "claimed": false, 00:17:24.125 "zoned": false, 00:17:24.125 "supported_io_types": { 00:17:24.125 "read": true, 00:17:24.125 "write": true, 00:17:24.125 "unmap": true, 00:17:24.125 "flush": false, 00:17:24.125 "reset": true, 00:17:24.125 "nvme_admin": false, 00:17:24.125 "nvme_io": false, 00:17:24.125 "nvme_io_md": false, 00:17:24.125 "write_zeroes": true, 00:17:24.125 "zcopy": false, 00:17:24.125 "get_zone_info": false, 00:17:24.125 "zone_management": false, 00:17:24.125 "zone_append": false, 00:17:24.125 "compare": false, 00:17:24.125 "compare_and_write": false, 00:17:24.125 "abort": false, 00:17:24.125 "seek_hole": true, 00:17:24.125 "seek_data": true, 00:17:24.125 "copy": false, 00:17:24.125 "nvme_iov_md": false 00:17:24.125 }, 00:17:24.125 "driver_specific": { 00:17:24.125 "lvol": { 00:17:24.125 "lvol_store_uuid": "bb373656-262d-48a9-9413-b8e6f226da14", 00:17:24.125 "base_bdev": "nvme0n1", 00:17:24.125 "thin_provision": true, 00:17:24.125 "num_allocated_clusters": 0, 00:17:24.125 "snapshot": false, 00:17:24.125 "clone": false, 00:17:24.125 "esnap_clone": false 00:17:24.125 } 00:17:24.125 } 00:17:24.125 } 00:17:24.125 ]' 00:17:24.125 14:06:02 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:24.125 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:24.125 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:24.125 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:24.125 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:24.125 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:24.125 14:06:02 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:17:24.125 14:06:02 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:17:24.125 14:06:02 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:24.386 14:06:02 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:24.386 14:06:02 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:24.386 14:06:02 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.386 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.386 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:24.386 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:24.386 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:24.386 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.647 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:24.647 { 00:17:24.647 "name": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.647 "aliases": [ 00:17:24.647 "lvs/nvme0n1p0" 00:17:24.647 ], 00:17:24.647 "product_name": "Logical Volume", 00:17:24.647 "block_size": 4096, 00:17:24.647 "num_blocks": 26476544, 00:17:24.647 "uuid": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.648 "assigned_rate_limits": { 00:17:24.648 "rw_ios_per_sec": 0, 00:17:24.648 "rw_mbytes_per_sec": 0, 00:17:24.648 "r_mbytes_per_sec": 0, 00:17:24.648 "w_mbytes_per_sec": 0 00:17:24.648 }, 00:17:24.648 "claimed": false, 00:17:24.648 "zoned": false, 00:17:24.648 "supported_io_types": { 00:17:24.648 "read": true, 00:17:24.648 "write": true, 00:17:24.648 "unmap": true, 00:17:24.648 "flush": false, 00:17:24.648 "reset": true, 00:17:24.648 "nvme_admin": false, 00:17:24.648 "nvme_io": false, 00:17:24.648 "nvme_io_md": false, 00:17:24.648 "write_zeroes": true, 00:17:24.648 "zcopy": false, 00:17:24.648 "get_zone_info": false, 00:17:24.648 "zone_management": false, 00:17:24.648 "zone_append": false, 00:17:24.648 "compare": false, 00:17:24.648 "compare_and_write": false, 00:17:24.648 "abort": false, 00:17:24.648 "seek_hole": true, 00:17:24.648 "seek_data": true, 00:17:24.648 "copy": false, 00:17:24.648 "nvme_iov_md": false 00:17:24.648 }, 00:17:24.648 "driver_specific": { 00:17:24.648 "lvol": { 00:17:24.648 "lvol_store_uuid": "bb373656-262d-48a9-9413-b8e6f226da14", 00:17:24.648 "base_bdev": "nvme0n1", 00:17:24.648 "thin_provision": true, 00:17:24.648 "num_allocated_clusters": 0, 00:17:24.648 "snapshot": false, 00:17:24.648 "clone": false, 00:17:24.648 "esnap_clone": false 00:17:24.648 } 00:17:24.648 } 00:17:24.648 } 00:17:24.648 ]' 00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
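The repeated get_bdev_size calls traced in this log all follow one pattern: fetch the bdev's JSON with rpc.py bdev_get_bdevs, pull block_size and num_blocks out with jq, and convert the product to MiB. A sketch of that helper (the real one lives in test/common/autotest_common.sh); the numbers work out to 1310720 x 4096 B = 5120 MiB for nvme0n1 and 26476544 x 4096 B = 103424 MiB for the lvol:

```bash
# Sketch of the size helper traced in this log; paths as in this run.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_bdev_size() {
  local bdev_name=$1 bdev_info bs nb
  bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in both JSON dumps above
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 / 26476544 above
  echo $(( bs * nb / 1024 / 1024 ))             # size in MiB: 5120 / 103424
}
get_bdev_size nvme0n1   # -> 5120 (requires a running SPDK target)
```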
00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:24.648 14:06:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:24.648 14:06:02 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:17:24.648 14:06:02 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:24.909 14:06:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:17:24.909 14:06:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:24.909 { 00:17:24.909 "name": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.909 "aliases": [ 00:17:24.909 "lvs/nvme0n1p0" 00:17:24.909 ], 00:17:24.909 "product_name": "Logical Volume", 00:17:24.909 "block_size": 4096, 00:17:24.909 "num_blocks": 26476544, 00:17:24.909 "uuid": "0d16ec71-5c24-4d56-a640-1b4a6fa55d3f", 00:17:24.909 "assigned_rate_limits": { 00:17:24.909 "rw_ios_per_sec": 0, 00:17:24.909 "rw_mbytes_per_sec": 0, 00:17:24.909 "r_mbytes_per_sec": 0, 00:17:24.909 "w_mbytes_per_sec": 0 00:17:24.909 }, 00:17:24.909 "claimed": false, 00:17:24.909 "zoned": false, 00:17:24.909 "supported_io_types": { 00:17:24.909 "read": true, 00:17:24.909 "write": true, 00:17:24.909 "unmap": true, 00:17:24.909 "flush": false, 00:17:24.909 "reset": true, 00:17:24.909 "nvme_admin": false, 00:17:24.909 "nvme_io": false, 00:17:24.909 "nvme_io_md": false, 00:17:24.909 "write_zeroes": true, 00:17:24.909 "zcopy": false, 00:17:24.909 "get_zone_info": false, 00:17:24.909 "zone_management": false, 00:17:24.909 "zone_append": false, 00:17:24.909 "compare": false, 00:17:24.909 "compare_and_write": false, 00:17:24.909 "abort": false, 00:17:24.909 "seek_hole": true, 00:17:24.909 "seek_data": true, 00:17:24.909 "copy": false, 00:17:24.909 "nvme_iov_md": false 00:17:24.909 }, 00:17:24.909 "driver_specific": { 00:17:24.909 "lvol": { 00:17:24.909 "lvol_store_uuid": "bb373656-262d-48a9-9413-b8e6f226da14", 00:17:24.909 "base_bdev": "nvme0n1", 00:17:24.909 "thin_provision": true, 00:17:24.909 "num_allocated_clusters": 0, 00:17:24.909 "snapshot": false, 00:17:24.909 "clone": false, 00:17:24.909 "esnap_clone": false 00:17:24.909 } 00:17:24.909 } 00:17:24.909 } 00:17:24.909 ]' 00:17:24.909 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:25.172 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:17:25.172 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:25.172 14:06:03 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:17:25.172 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:25.172 14:06:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f --l2p_dram_limit 10' 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:17:25.172 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:17:25.172 14:06:03 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0d16ec71-5c24-4d56-a640-1b4a6fa55d3f --l2p_dram_limit 10 -c nvc0n1p0 00:17:25.172 [2024-11-17 14:06:03.449365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.449402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:25.172 [2024-11-17 14:06:03.449415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:25.172 [2024-11-17 14:06:03.449423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.449469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.449479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:25.172 [2024-11-17 14:06:03.449485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:25.172 [2024-11-17 14:06:03.449494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.449514] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:25.172 [2024-11-17 14:06:03.449724] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:25.172 [2024-11-17 14:06:03.449735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.449743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:25.172 [2024-11-17 14:06:03.449750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:25.172 [2024-11-17 14:06:03.449758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.449783] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7d8283dd-40dc-4766-adff-c1626714363d 00:17:25.172 [2024-11-17 14:06:03.450730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.450747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:25.172 [2024-11-17 14:06:03.450756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:25.172 [2024-11-17 14:06:03.450763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.455486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 
14:06:03.455508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:25.172 [2024-11-17 14:06:03.455518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:17:25.172 [2024-11-17 14:06:03.455524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.455620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.455630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:25.172 [2024-11-17 14:06:03.455640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:25.172 [2024-11-17 14:06:03.455648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.455684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.455691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:25.172 [2024-11-17 14:06:03.455698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:25.172 [2024-11-17 14:06:03.455703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.455722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:25.172 [2024-11-17 14:06:03.456943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.456966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:25.172 [2024-11-17 14:06:03.456975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:17:25.172 [2024-11-17 14:06:03.456983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.457007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.457016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:25.172 [2024-11-17 14:06:03.457022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:25.172 [2024-11-17 14:06:03.457030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.457043] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:25.172 [2024-11-17 14:06:03.457154] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:25.172 [2024-11-17 14:06:03.457164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:25.172 [2024-11-17 14:06:03.457174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:25.172 [2024-11-17 14:06:03.457181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457190] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457196] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:25.172 [2024-11-17 14:06:03.457205] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:25.172 [2024-11-17 14:06:03.457211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:25.172 [2024-11-17 14:06:03.457219] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:25.172 [2024-11-17 14:06:03.457226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.457233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:25.172 [2024-11-17 14:06:03.457251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:25.172 [2024-11-17 14:06:03.457258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.457322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.172 [2024-11-17 14:06:03.457331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:25.172 [2024-11-17 14:06:03.457337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:25.172 [2024-11-17 14:06:03.457346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.172 [2024-11-17 14:06:03.457427] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:25.172 [2024-11-17 14:06:03.457437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:25.172 [2024-11-17 14:06:03.457443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:25.172 [2024-11-17 14:06:03.457463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:25.172 [2024-11-17 14:06:03.457480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.172 [2024-11-17 14:06:03.457492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:25.172 [2024-11-17 14:06:03.457500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:25.172 [2024-11-17 14:06:03.457505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:25.172 [2024-11-17 14:06:03.457513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:25.172 [2024-11-17 14:06:03.457519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:25.172 [2024-11-17 14:06:03.457527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:25.172 [2024-11-17 14:06:03.457539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:25.172 [2024-11-17 14:06:03.457556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:25.172 
[2024-11-17 14:06:03.457574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:25.172 [2024-11-17 14:06:03.457589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:25.172 [2024-11-17 14:06:03.457596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.172 [2024-11-17 14:06:03.457602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:25.172 [2024-11-17 14:06:03.457611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:25.173 [2024-11-17 14:06:03.457616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:25.173 [2024-11-17 14:06:03.457624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:25.173 [2024-11-17 14:06:03.457630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:25.173 [2024-11-17 14:06:03.457637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.173 [2024-11-17 14:06:03.457642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:25.173 [2024-11-17 14:06:03.457650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:25.173 [2024-11-17 14:06:03.457655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:25.173 [2024-11-17 14:06:03.457664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:25.173 [2024-11-17 14:06:03.457669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:25.173 [2024-11-17 14:06:03.457676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.173 [2024-11-17 14:06:03.457682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:25.173 [2024-11-17 14:06:03.457690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:25.173 [2024-11-17 14:06:03.457696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.173 [2024-11-17 14:06:03.457702] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:25.173 [2024-11-17 14:06:03.457709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:25.173 [2024-11-17 14:06:03.457718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:25.173 [2024-11-17 14:06:03.457724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:25.173 [2024-11-17 14:06:03.457733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:25.173 [2024-11-17 14:06:03.457739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:25.173 [2024-11-17 14:06:03.457746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:25.173 [2024-11-17 14:06:03.457752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:25.173 [2024-11-17 14:06:03.457759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:25.173 [2024-11-17 14:06:03.457765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:25.173 [2024-11-17 14:06:03.457775] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:25.173 [2024-11-17 
14:06:03.457782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:25.173 [2024-11-17 14:06:03.457798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:25.173 [2024-11-17 14:06:03.457805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:25.173 [2024-11-17 14:06:03.457812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:25.173 [2024-11-17 14:06:03.457819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:25.173 [2024-11-17 14:06:03.457825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:25.173 [2024-11-17 14:06:03.457835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:25.173 [2024-11-17 14:06:03.457841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:25.173 [2024-11-17 14:06:03.457848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:25.173 [2024-11-17 14:06:03.457855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:25.173 [2024-11-17 14:06:03.457890] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:25.173 [2024-11-17 14:06:03.457898] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:25.173 [2024-11-17 14:06:03.457912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:25.173 [2024-11-17 14:06:03.457919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:25.173 [2024-11-17 14:06:03.457927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:25.173 [2024-11-17 14:06:03.457934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:25.173 [2024-11-17 14:06:03.457940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:25.173 [2024-11-17 14:06:03.457950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:17:25.173 [2024-11-17 14:06:03.457958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:25.173 [2024-11-17 14:06:03.457988] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:25.173 [2024-11-17 14:06:03.457996] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:29.381 [2024-11-17 14:06:07.259584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.381 [2024-11-17 14:06:07.259636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:29.381 [2024-11-17 14:06:07.259652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3801.579 ms 00:17:29.382 [2024-11-17 14:06:07.259659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.266940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.266972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.382 [2024-11-17 14:06:07.266983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.223 ms 00:17:29.382 [2024-11-17 14:06:07.266988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.267060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.267068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:29.382 [2024-11-17 14:06:07.267078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:29.382 [2024-11-17 14:06:07.267084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.273948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.273973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.382 [2024-11-17 14:06:07.273983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms 00:17:29.382 [2024-11-17 14:06:07.273989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.274010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.274019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.382 [2024-11-17 14:06:07.274026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:29.382 [2024-11-17 14:06:07.274032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.274367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.274385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.382 [2024-11-17 14:06:07.274394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:17:29.382 [2024-11-17 14:06:07.274400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 
[2024-11-17 14:06:07.274481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.274487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.382 [2024-11-17 14:06:07.274497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:29.382 [2024-11-17 14:06:07.274502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.286919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.286956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.382 [2024-11-17 14:06:07.286970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.398 ms 00:17:29.382 [2024-11-17 14:06:07.286978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.295133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:29.382 [2024-11-17 14:06:07.297716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.297856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:29.382 [2024-11-17 14:06:07.297871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.660 ms 00:17:29.382 [2024-11-17 14:06:07.297881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.345429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.345463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:29.382 [2024-11-17 14:06:07.345472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.524 ms 00:17:29.382 [2024-11-17 14:06:07.345482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.345625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.345635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:29.382 [2024-11-17 14:06:07.345641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:17:29.382 [2024-11-17 14:06:07.345649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.348124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.348155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:29.382 [2024-11-17 14:06:07.348163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.461 ms 00:17:29.382 [2024-11-17 14:06:07.348171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.350351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.350379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:29.382 [2024-11-17 14:06:07.350388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:17:29.382 [2024-11-17 14:06:07.350396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.350630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.350642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:29.382 
[2024-11-17 14:06:07.350650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:29.382 [2024-11-17 14:06:07.350659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.377341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.377374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:29.382 [2024-11-17 14:06:07.377383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.658 ms 00:17:29.382 [2024-11-17 14:06:07.377391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.380554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.380585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:29.382 [2024-11-17 14:06:07.380594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:17:29.382 [2024-11-17 14:06:07.380602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.383306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.383416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:29.382 [2024-11-17 14:06:07.383441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.687 ms 00:17:29.382 [2024-11-17 14:06:07.383448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.386053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.386082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:29.382 [2024-11-17 14:06:07.386090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:17:29.382 [2024-11-17 14:06:07.386100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.386119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.386128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:29.382 [2024-11-17 14:06:07.386135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.382 [2024-11-17 14:06:07.386143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.386192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.382 [2024-11-17 14:06:07.386201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:29.382 [2024-11-17 14:06:07.386209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:29.382 [2024-11-17 14:06:07.386222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.382 [2024-11-17 14:06:07.386895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3937.217 ms, result 0 00:17:29.382 { 00:17:29.382 "name": "ftl0", 00:17:29.382 "uuid": "7d8283dd-40dc-4766-adff-c1626714363d" 00:17:29.382 } 00:17:29.382 14:06:07 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:17:29.382 14:06:07 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:29.382 14:06:07 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:17:29.382 14:06:07 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:29.645 [2024-11-17 14:06:07.792387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.792521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:29.645 [2024-11-17 14:06:07.792571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:29.645 [2024-11-17 14:06:07.792590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.792625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:29.645 [2024-11-17 14:06:07.793039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.793116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:29.645 [2024-11-17 14:06:07.793159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:17:29.645 [2024-11-17 14:06:07.793178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.793392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.793448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:29.645 [2024-11-17 14:06:07.793509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:17:29.645 [2024-11-17 14:06:07.793528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.795994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.796061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:29.645 [2024-11-17 14:06:07.796099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms 00:17:29.645 [2024-11-17 14:06:07.796118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.800803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.800886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:29.645 [2024-11-17 14:06:07.800926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.661 ms 00:17:29.645 [2024-11-17 14:06:07.800948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.802292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.802375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:29.645 [2024-11-17 14:06:07.802415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:17:29.645 [2024-11-17 14:06:07.802436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.806254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.806343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:29.645 [2024-11-17 14:06:07.806384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:17:29.645 [2024-11-17 14:06:07.806403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.645 [2024-11-17 14:06:07.806570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.645 [2024-11-17 14:06:07.806621] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:29.646 [2024-11-17 14:06:07.806657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:29.646 [2024-11-17 14:06:07.806676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.646 [2024-11-17 14:06:07.807959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.646 [2024-11-17 14:06:07.808042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:29.646 [2024-11-17 14:06:07.808088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:17:29.646 [2024-11-17 14:06:07.808107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.646 [2024-11-17 14:06:07.809411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.646 [2024-11-17 14:06:07.809493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:29.646 [2024-11-17 14:06:07.809503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:17:29.646 [2024-11-17 14:06:07.809510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.646 [2024-11-17 14:06:07.810487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.646 [2024-11-17 14:06:07.810558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:29.646 [2024-11-17 14:06:07.810600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:17:29.646 [2024-11-17 14:06:07.810619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.646 [2024-11-17 14:06:07.811528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.646 [2024-11-17 14:06:07.811609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:29.646 [2024-11-17 14:06:07.811650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:17:29.646 [2024-11-17 14:06:07.811668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.646 [2024-11-17 14:06:07.811699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:29.646 [2024-11-17 14:06:07.811796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.811972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812136] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 
[2024-11-17 14:06:07.812538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:29.646 [2024-11-17 14:06:07.812698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:17:29.647 [2024-11-17 14:06:07.812704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:29.647 [2024-11-17 14:06:07.812989] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:29.647 [2024-11-17 14:06:07.812995] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8283dd-40dc-4766-adff-c1626714363d 00:17:29.647 [2024-11-17 14:06:07.813003] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:29.647 [2024-11-17 14:06:07.813008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:29.647 [2024-11-17 14:06:07.813016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:29.647 [2024-11-17 14:06:07.813023] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:29.647 [2024-11-17 14:06:07.813029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:29.647 [2024-11-17 14:06:07.813036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:29.647 [2024-11-17 14:06:07.813043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:29.647 [2024-11-17 14:06:07.813048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:29.647 [2024-11-17 14:06:07.813054] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:17:29.647 [2024-11-17 14:06:07.813060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.647 [2024-11-17 14:06:07.813069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:29.647 [2024-11-17 14:06:07.813075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:17:29.647 [2024-11-17 14:06:07.813082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.647 [2024-11-17 14:06:07.814539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.647 [2024-11-17 14:06:07.814604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:29.647 [2024-11-17 14:06:07.814685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:17:29.647 [2024-11-17 14:06:07.814704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.647 [2024-11-17 14:06:07.814790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.647 [2024-11-17 14:06:07.814814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:29.647 [2024-11-17 14:06:07.814858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:29.647 [2024-11-17 14:06:07.814877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.647 [2024-11-17 14:06:07.819318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.647 [2024-11-17 14:06:07.819405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:29.647 [2024-11-17 14:06:07.819499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.647 [2024-11-17 14:06:07.819519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.647 [2024-11-17 14:06:07.819569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.647 [2024-11-17 14:06:07.819641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:29.647 [2024-11-17 14:06:07.819659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.647 [2024-11-17 14:06:07.819674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.647 [2024-11-17 14:06:07.819722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.647 [2024-11-17 14:06:07.819749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:29.648 [2024-11-17 14:06:07.819851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.819870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.819894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.819913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:29.648 [2024-11-17 14:06:07.819928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.820019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.827334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.827460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:29.648 [2024-11-17 14:06:07.827535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 
[2024-11-17 14:06:07.827555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.834114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:29.648 [2024-11-17 14:06:07.834159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.834182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.834343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.648 [2024-11-17 14:06:07.834362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.834379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.834521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.648 [2024-11-17 14:06:07.834544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.834560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.834717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.648 [2024-11-17 14:06:07.834738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.834754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.834875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:29.648 [2024-11-17 14:06:07.834893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.834912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.834948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.835037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.648 [2024-11-17 14:06:07.835055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.835072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.835118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:29.648 [2024-11-17 14:06:07.835199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.648 [2024-11-17 14:06:07.835219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:29.648 [2024-11-17 14:06:07.835235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.648 [2024-11-17 14:06:07.835361] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.951 ms, result 0 00:17:29.648 true 00:17:29.648 14:06:07 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86086 00:17:29.648 
14:06:07 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86086 ']' 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86086 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86086 00:17:29.648 killing process with pid 86086 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86086' 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86086 00:17:29.648 14:06:07 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86086 00:17:34.943 14:06:12 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:17:39.149 262144+0 records in 00:17:39.149 262144+0 records out 00:17:39.149 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.80504 s, 282 MB/s 00:17:39.149 14:06:16 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:17:40.594 14:06:18 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:40.594 [2024-11-17 14:06:18.863564] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:40.594 [2024-11-17 14:06:18.863678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86300 ] 00:17:40.855 [2024-11-17 14:06:19.011653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.855 [2024-11-17 14:06:19.052713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.116 [2024-11-17 14:06:19.162889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.116 [2024-11-17 14:06:19.163206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:41.116 [2024-11-17 14:06:19.324406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.116 [2024-11-17 14:06:19.324628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.116 [2024-11-17 14:06:19.324664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:41.116 [2024-11-17 14:06:19.324673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.116 [2024-11-17 14:06:19.324747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.116 [2024-11-17 14:06:19.324762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.116 [2024-11-17 14:06:19.324771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:41.117 [2024-11-17 14:06:19.324779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.324802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:17:41.117 [2024-11-17 14:06:19.325080] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.117 [2024-11-17 14:06:19.325098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.325107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.117 [2024-11-17 14:06:19.325120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:17:41.117 [2024-11-17 14:06:19.325135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.326871] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:41.117 [2024-11-17 14:06:19.330710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.330763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:41.117 [2024-11-17 14:06:19.330776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:17:41.117 [2024-11-17 14:06:19.330784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.330870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.330881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:41.117 [2024-11-17 14:06:19.330893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:41.117 [2024-11-17 14:06:19.330901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.339209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.339449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.117 [2024-11-17 14:06:19.339470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.264 ms 00:17:41.117 [2024-11-17 14:06:19.339488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.339611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.339623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.117 [2024-11-17 14:06:19.339632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:41.117 [2024-11-17 14:06:19.339640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.339705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.339716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.117 [2024-11-17 14:06:19.339725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:41.117 [2024-11-17 14:06:19.339732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.339759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.117 [2024-11-17 14:06:19.341787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.341826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.117 [2024-11-17 14:06:19.341838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:17:41.117 [2024-11-17 14:06:19.341845] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.341881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.341891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.117 [2024-11-17 14:06:19.341909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:41.117 [2024-11-17 14:06:19.341921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.341945] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:41.117 [2024-11-17 14:06:19.341973] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:41.117 [2024-11-17 14:06:19.342012] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:41.117 [2024-11-17 14:06:19.342029] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:41.117 [2024-11-17 14:06:19.342136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:41.117 [2024-11-17 14:06:19.342151] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.117 [2024-11-17 14:06:19.342167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:41.117 [2024-11-17 14:06:19.342177] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342189] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342198] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.117 [2024-11-17 14:06:19.342206] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.117 [2024-11-17 14:06:19.342214] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:41.117 [2024-11-17 14:06:19.342222] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:41.117 [2024-11-17 14:06:19.342230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.342281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.117 [2024-11-17 14:06:19.342290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:41.117 [2024-11-17 14:06:19.342298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.342386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.117 [2024-11-17 14:06:19.342397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.117 [2024-11-17 14:06:19.342405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:41.117 [2024-11-17 14:06:19.342413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.117 [2024-11-17 14:06:19.342513] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.117 [2024-11-17 14:06:19.342525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.117 [2024-11-17 14:06:19.342534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:17:41.117 [2024-11-17 14:06:19.342550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.117 [2024-11-17 14:06:19.342568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.117 [2024-11-17 14:06:19.342594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.117 [2024-11-17 14:06:19.342611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.117 [2024-11-17 14:06:19.342620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.117 [2024-11-17 14:06:19.342630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.117 [2024-11-17 14:06:19.342638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.117 [2024-11-17 14:06:19.342646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:41.117 [2024-11-17 14:06:19.342655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.117 [2024-11-17 14:06:19.342673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.117 [2024-11-17 14:06:19.342698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.117 [2024-11-17 14:06:19.342722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.117 [2024-11-17 14:06:19.342729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.117 [2024-11-17 14:06:19.342737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.117 [2024-11-17 14:06:19.342744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.118 [2024-11-17 14:06:19.342766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.118 [2024-11-17 14:06:19.342774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:41.118 [2024-11-17 14:06:19.342789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.118 [2024-11-17 14:06:19.342797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.118 [2024-11-17 14:06:19.342813] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:17:41.118 [2024-11-17 14:06:19.342821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:41.118 [2024-11-17 14:06:19.342828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.118 [2024-11-17 14:06:19.342836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:41.118 [2024-11-17 14:06:19.342843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:41.118 [2024-11-17 14:06:19.342850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:41.118 [2024-11-17 14:06:19.342863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:41.118 [2024-11-17 14:06:19.342870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342876] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.118 [2024-11-17 14:06:19.342889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.118 [2024-11-17 14:06:19.342897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.118 [2024-11-17 14:06:19.342907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.118 [2024-11-17 14:06:19.342915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.118 [2024-11-17 14:06:19.342923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.118 [2024-11-17 14:06:19.342930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.118 [2024-11-17 14:06:19.342937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.118 [2024-11-17 14:06:19.342944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.118 [2024-11-17 14:06:19.342951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.118 [2024-11-17 14:06:19.342959] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.118 [2024-11-17 14:06:19.342968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.342978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.118 [2024-11-17 14:06:19.342986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:41.118 [2024-11-17 14:06:19.342993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:41.118 [2024-11-17 14:06:19.343000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:41.118 [2024-11-17 14:06:19.343007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:41.118 [2024-11-17 14:06:19.343016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:41.118 [2024-11-17 14:06:19.343023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:41.118 [2024-11-17 14:06:19.343030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:41.118 [2024-11-17 14:06:19.343038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:41.118 [2024-11-17 14:06:19.343045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:41.118 [2024-11-17 14:06:19.343081] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.118 [2024-11-17 14:06:19.343089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.118 [2024-11-17 14:06:19.343110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.118 [2024-11-17 14:06:19.343118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.118 [2024-11-17 14:06:19.343125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.118 [2024-11-17 14:06:19.343133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.343142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.118 [2024-11-17 14:06:19.343154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:17:41.118 [2024-11-17 14:06:19.343161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.367904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.368191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.118 [2024-11-17 14:06:19.368399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.689 ms 00:17:41.118 [2024-11-17 14:06:19.368446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.368702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.368751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.118 [2024-11-17 14:06:19.368839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.116 ms 00:17:41.118 [2024-11-17 14:06:19.368881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.381532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.381705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.118 [2024-11-17 14:06:19.381762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:17:41.118 [2024-11-17 14:06:19.381784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.381837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.381860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.118 [2024-11-17 14:06:19.381881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.118 [2024-11-17 14:06:19.381901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.382493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.382645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.118 [2024-11-17 14:06:19.383123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:41.118 [2024-11-17 14:06:19.383151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.383359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.383373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.118 [2024-11-17 14:06:19.383383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:17:41.118 [2024-11-17 14:06:19.383391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.390481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.390528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.118 [2024-11-17 14:06:19.390547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.066 ms 00:17:41.118 [2024-11-17 14:06:19.390556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.118 [2024-11-17 14:06:19.394601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:41.118 [2024-11-17 14:06:19.394661] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.118 [2024-11-17 14:06:19.394674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.118 [2024-11-17 14:06:19.394682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.119 [2024-11-17 14:06:19.394692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.011 ms 00:17:41.119 [2024-11-17 14:06:19.394699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.119 [2024-11-17 14:06:19.411001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.119 [2024-11-17 14:06:19.411059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.119 [2024-11-17 14:06:19.411073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.248 ms 00:17:41.119 [2024-11-17 14:06:19.411090] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.414210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.414401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.380 [2024-11-17 14:06:19.414420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:17:41.380 [2024-11-17 14:06:19.414427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.417027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.417075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.380 [2024-11-17 14:06:19.417086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:17:41.380 [2024-11-17 14:06:19.417093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.417485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.417500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.380 [2024-11-17 14:06:19.417511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:41.380 [2024-11-17 14:06:19.417519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.442866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.442914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.380 [2024-11-17 14:06:19.442935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.329 ms 00:17:41.380 [2024-11-17 14:06:19.442944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.451089] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:17:41.380 [2024-11-17 14:06:19.454126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.454168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.380 [2024-11-17 14:06:19.454179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.129 ms 00:17:41.380 [2024-11-17 14:06:19.454192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.454291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.454303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.380 [2024-11-17 14:06:19.454312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:41.380 [2024-11-17 14:06:19.454320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.454389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.454399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.380 [2024-11-17 14:06:19.454408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:41.380 [2024-11-17 14:06:19.454416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.454444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.454460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:17:41.380 [2024-11-17 14:06:19.454469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:41.380 [2024-11-17 14:06:19.454477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.454513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.380 [2024-11-17 14:06:19.454526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.454535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.380 [2024-11-17 14:06:19.454544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:41.380 [2024-11-17 14:06:19.454552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.459859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.459910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.380 [2024-11-17 14:06:19.459921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:17:41.380 [2024-11-17 14:06:19.459929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.460014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.380 [2024-11-17 14:06:19.460024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.380 [2024-11-17 14:06:19.460034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:41.380 [2024-11-17 14:06:19.460046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.380 [2024-11-17 14:06:19.461137] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.300 ms, result 0 00:17:42.323 [2024-11-17T14:06:21.577Z] Copying: 22/1024 [MB] (22 MBps)
[2024-11-17T14:07:08.094Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-17 14:07:08.082761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.793 [2024-11-17 14:07:08.082800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:29.793 [2024-11-17 14:07:08.082811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:29.793 [2024-11-17 14:07:08.082818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.793 [2024-11-17 14:07:08.082837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:29.793 [2024-11-17 14:07:08.083264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.793 [2024-11-17 14:07:08.083284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:29.793 [2024-11-17 14:07:08.083292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:18:29.793 [2024-11-17 14:07:08.083299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.793 [2024-11-17 14:07:08.085023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.793 [2024-11-17 14:07:08.085052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:29.793 [2024-11-17 14:07:08.085060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:18:29.793 [2024-11-17 14:07:08.085065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.056 [2024-11-17 14:07:08.098923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.098956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:30.057 [2024-11-17 14:07:08.098964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.846 ms 00:18:30.057 [2024-11-17 14:07:08.098970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.103820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.103850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:30.057 [2024-11-17 14:07:08.103857] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:18:30.057 [2024-11-17 14:07:08.103862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.104764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.104791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:30.057 [2024-11-17 14:07:08.104798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:18:30.057 [2024-11-17 14:07:08.104804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.107838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.107870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:30.057 [2024-11-17 14:07:08.107877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.010 ms 00:18:30.057 [2024-11-17 14:07:08.107883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.107966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.107973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:30.057 [2024-11-17 14:07:08.107979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:30.057 [2024-11-17 14:07:08.107985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.109574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.109686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:30.057 [2024-11-17 14:07:08.109697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:18:30.057 [2024-11-17 14:07:08.109703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.110789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.110822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:30.057 [2024-11-17 14:07:08.110828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:18:30.057 [2024-11-17 14:07:08.110833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.111911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.111939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:30.057 [2024-11-17 14:07:08.111945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:18:30.057 [2024-11-17 14:07:08.111950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.112846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.057 [2024-11-17 14:07:08.112942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:30.057 [2024-11-17 14:07:08.112954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.855 ms 00:18:30.057 [2024-11-17 14:07:08.112959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.057 [2024-11-17 14:07:08.112980] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:30.057 [2024-11-17 14:07:08.112991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113292] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:30.057 [2024-11-17 14:07:08.113337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 
14:07:08.113432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:30.058 [2024-11-17 14:07:08.113566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
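The band validity dump above is easier to check mechanically than by eye. A minimal sketch, assuming only the record shape visible in the lines above ("Band N: valid / total wr_cnt: W state: S"); the helper name and summary format are illustrative, not part of SPDK:

    import re
    from collections import Counter

    # Matches ftl_dev_dump_bands records such as:
    #   [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
    BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

    def summarize_bands(log_text):
        states = Counter()   # band count per reported state
        valid_blocks = 0     # sum of the per-band valid-block counts
        for band, valid, total, wr_cnt, state in BAND_RE.findall(log_text):
            states[state] += 1
            valid_blocks += int(valid)
        return dict(states), valid_blocks

For the shutdown dump above this yields 100 bands, all 'free', with 0 valid blocks, which is consistent with the 'total valid LBAs: 0' statistics record that follows.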
00:18:30.058 [2024-11-17 14:07:08.113578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:30.058 [2024-11-17 14:07:08.113584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8283dd-40dc-4766-adff-c1626714363d 00:18:30.058 [2024-11-17 14:07:08.113590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:30.058 [2024-11-17 14:07:08.113595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:30.058 [2024-11-17 14:07:08.113603] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:30.058 [2024-11-17 14:07:08.113608] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:30.058 [2024-11-17 14:07:08.113614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:30.058 [2024-11-17 14:07:08.113619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:30.058 [2024-11-17 14:07:08.113627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:30.058 [2024-11-17 14:07:08.113632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:30.058 [2024-11-17 14:07:08.113637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:30.058 [2024-11-17 14:07:08.113642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.058 [2024-11-17 14:07:08.113648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:30.058 [2024-11-17 14:07:08.113654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:18:30.058 [2024-11-17 14:07:08.113664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.114875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.058 [2024-11-17 14:07:08.114899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:30.058 [2024-11-17 14:07:08.114905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:18:30.058 [2024-11-17 14:07:08.114911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.114978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.058 [2024-11-17 14:07:08.114984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:30.058 [2024-11-17 14:07:08.114992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:30.058 [2024-11-17 14:07:08.115000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.118886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.118975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:30.058 [2024-11-17 14:07:08.119023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.119041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.119090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.119126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:30.058 [2024-11-17 14:07:08.119199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.119216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 
14:07:08.119280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.119394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:30.058 [2024-11-17 14:07:08.119413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.119427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.119448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.119493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:30.058 [2024-11-17 14:07:08.119510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.119528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.127053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.127168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:30.058 [2024-11-17 14:07:08.127213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.127232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.133503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.133617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.058 [2024-11-17 14:07:08.133669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.133687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.133739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.133814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.058 [2024-11-17 14:07:08.133832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.133848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.133890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.133948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.058 [2024-11-17 14:07:08.133967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.058 [2024-11-17 14:07:08.133981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.058 [2024-11-17 14:07:08.134049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.058 [2024-11-17 14:07:08.134078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.059 [2024-11-17 14:07:08.134097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.059 [2024-11-17 14:07:08.134112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.059 [2024-11-17 14:07:08.134174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.059 [2024-11-17 14:07:08.134193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:30.059 [2024-11-17 14:07:08.134259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.059 [2024-11-17 14:07:08.134277] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.059 [2024-11-17 14:07:08.134317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.059 [2024-11-17 14:07:08.134358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.059 [2024-11-17 14:07:08.134375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.059 [2024-11-17 14:07:08.134505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.059 [2024-11-17 14:07:08.134601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:30.059 [2024-11-17 14:07:08.134621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.059 [2024-11-17 14:07:08.134636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:30.059 [2024-11-17 14:07:08.134650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.059 [2024-11-17 14:07:08.134809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.029 ms, result 0 00:18:30.632 00:18:30.632 00:18:30.632 14:07:08 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:30.632 [2024-11-17 14:07:08.743833] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:30.632 [2024-11-17 14:07:08.744116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86822 ] 00:18:30.632 [2024-11-17 14:07:08.890569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.632 [2024-11-17 14:07:08.929409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.893 [2024-11-17 14:07:09.013468] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:30.893 [2024-11-17 14:07:09.013672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:30.893 [2024-11-17 14:07:09.160827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.893 [2024-11-17 14:07:09.160948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:30.893 [2024-11-17 14:07:09.161011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:30.893 [2024-11-17 14:07:09.161021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.893 [2024-11-17 14:07:09.161062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.893 [2024-11-17 14:07:09.161070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:30.893 [2024-11-17 14:07:09.161076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:30.893 [2024-11-17 14:07:09.161082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.893 [2024-11-17 14:07:09.161096] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:30.894 [2024-11-17 14:07:09.161288] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:30.894 [2024-11-17 14:07:09.161300] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.161306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:30.894 [2024-11-17 14:07:09.161320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:30.894 [2024-11-17 14:07:09.161328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.162315] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:30.894 [2024-11-17 14:07:09.164180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.164303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:30.894 [2024-11-17 14:07:09.164323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:18:30.894 [2024-11-17 14:07:09.164330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.164374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.164384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:30.894 [2024-11-17 14:07:09.164391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:30.894 [2024-11-17 14:07:09.164396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.168829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.168910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:30.894 [2024-11-17 14:07:09.168958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:18:30.894 [2024-11-17 14:07:09.168980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.169054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.169073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:30.894 [2024-11-17 14:07:09.169088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:30.894 [2024-11-17 14:07:09.169106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.169152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.169175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:30.894 [2024-11-17 14:07:09.169191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:30.894 [2024-11-17 14:07:09.169253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.169345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:30.894 [2024-11-17 14:07:09.170506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.170586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:30.894 [2024-11-17 14:07:09.170625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:18:30.894 [2024-11-17 14:07:09.170642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.170677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.170719] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:30.894 [2024-11-17 14:07:09.170737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:30.894 [2024-11-17 14:07:09.170752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.170792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:30.894 [2024-11-17 14:07:09.170822] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:30.894 [2024-11-17 14:07:09.170892] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:30.894 [2024-11-17 14:07:09.170925] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:30.894 [2024-11-17 14:07:09.171041] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:30.894 [2024-11-17 14:07:09.171103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:30.894 [2024-11-17 14:07:09.171128] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:30.894 [2024-11-17 14:07:09.171152] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:30.894 [2024-11-17 14:07:09.171202] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:30.894 [2024-11-17 14:07:09.171226] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:30.894 [2024-11-17 14:07:09.171254] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:30.894 [2024-11-17 14:07:09.171270] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:30.894 [2024-11-17 14:07:09.171284] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:30.894 [2024-11-17 14:07:09.171325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.171341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:30.894 [2024-11-17 14:07:09.171480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:18:30.894 [2024-11-17 14:07:09.171553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.171634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.894 [2024-11-17 14:07:09.171683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:30.894 [2024-11-17 14:07:09.171702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:30.894 [2024-11-17 14:07:09.171749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.894 [2024-11-17 14:07:09.171836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:30.894 [2024-11-17 14:07:09.171902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:30.894 [2024-11-17 14:07:09.171919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.894 [2024-11-17 14:07:09.171941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.894 [2024-11-17 14:07:09.171955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:30.894 
[2024-11-17 14:07:09.171970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:30.894 [2024-11-17 14:07:09.172014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:30.894 [2024-11-17 14:07:09.172030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:30.894 [2024-11-17 14:07:09.172045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:30.894 [2024-11-17 14:07:09.172059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.894 [2024-11-17 14:07:09.172073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:30.894 [2024-11-17 14:07:09.172111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:30.894 [2024-11-17 14:07:09.172127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:30.894 [2024-11-17 14:07:09.172144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:30.894 [2024-11-17 14:07:09.172216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:30.895 [2024-11-17 14:07:09.172232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:30.895 [2024-11-17 14:07:09.172274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:30.895 [2024-11-17 14:07:09.172345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:30.895 [2024-11-17 14:07:09.172387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:30.895 [2024-11-17 14:07:09.172488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:30.895 [2024-11-17 14:07:09.172555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:30.895 [2024-11-17 14:07:09.172596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:30.895 [2024-11-17 14:07:09.172654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:30.895 [2024-11-17 14:07:09.172668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:30.895 [2024-11-17 14:07:09.172682] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:18:30.895 [2024-11-17 14:07:09.172695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:30.895 [2024-11-17 14:07:09.172709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:30.895 [2024-11-17 14:07:09.172748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:30.895 [2024-11-17 14:07:09.172799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:30.895 [2024-11-17 14:07:09.172829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:30.895 [2024-11-17 14:07:09.172859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:30.895 [2024-11-17 14:07:09.172876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:30.895 [2024-11-17 14:07:09.172945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:30.895 [2024-11-17 14:07:09.172962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:30.895 [2024-11-17 14:07:09.172976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:30.895 [2024-11-17 14:07:09.172990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:30.895 [2024-11-17 14:07:09.173004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:30.895 [2024-11-17 14:07:09.173017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:30.895 [2024-11-17 14:07:09.173031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:30.895 [2024-11-17 14:07:09.173047] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:30.895 [2024-11-17 14:07:09.173098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:30.895 [2024-11-17 14:07:09.173145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:30.895 [2024-11-17 14:07:09.173166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:30.895 [2024-11-17 14:07:09.173187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:30.895 [2024-11-17 14:07:09.173246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:30.895 [2024-11-17 14:07:09.173270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:30.895 [2024-11-17 14:07:09.173298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:30.895 [2024-11-17 14:07:09.173320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 
00:18:30.895 [2024-11-17 14:07:09.173342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:30.895 [2024-11-17 14:07:09.173391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:30.895 [2024-11-17 14:07:09.173552] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:30.895 [2024-11-17 14:07:09.173574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:30.895 [2024-11-17 14:07:09.173618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:30.895 [2024-11-17 14:07:09.173639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:30.895 [2024-11-17 14:07:09.173684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:30.895 [2024-11-17 14:07:09.173709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.895 [2024-11-17 14:07:09.173729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:30.895 [2024-11-17 14:07:09.173745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.925 ms 00:18:30.895 [2024-11-17 14:07:09.173760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.895 [2024-11-17 14:07:09.190137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.895 [2024-11-17 14:07:09.190288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:30.895 [2024-11-17 14:07:09.190344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.320 ms 00:18:30.895 [2024-11-17 14:07:09.190377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.895 [2024-11-17 14:07:09.190477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.895 [2024-11-17 14:07:09.190506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:30.896 [2024-11-17 14:07:09.190526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:30.896 [2024-11-17 14:07:09.190544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.156 [2024-11-17 14:07:09.199215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:31.156 [2024-11-17 14:07:09.199365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.156 [2024-11-17 14:07:09.199486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.554 ms 00:18:31.156 [2024-11-17 14:07:09.199518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.156 [2024-11-17 14:07:09.199572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.156 [2024-11-17 14:07:09.199660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.156 [2024-11-17 14:07:09.199686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:31.156 [2024-11-17 14:07:09.199746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.156 [2024-11-17 14:07:09.200114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.156 [2024-11-17 14:07:09.200227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.156 [2024-11-17 14:07:09.200309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:18:31.156 [2024-11-17 14:07:09.200405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.156 [2024-11-17 14:07:09.200583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.156 [2024-11-17 14:07:09.200622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.156 [2024-11-17 14:07:09.200775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:31.156 [2024-11-17 14:07:09.200811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.156 [2024-11-17 14:07:09.205447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.156 [2024-11-17 14:07:09.205538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.157 [2024-11-17 14:07:09.205584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:18:31.157 [2024-11-17 14:07:09.205601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.207589] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:31.157 [2024-11-17 14:07:09.207691] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:31.157 [2024-11-17 14:07:09.207741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.207761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:31.157 [2024-11-17 14:07:09.207776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:18:31.157 [2024-11-17 14:07:09.207826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.219299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.219397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:31.157 [2024-11-17 14:07:09.219527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.435 ms 00:18:31.157 [2024-11-17 14:07:09.219545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.221009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.221093] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:31.157 [2024-11-17 14:07:09.221132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:18:31.157 [2024-11-17 14:07:09.221148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.222362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.222387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:31.157 [2024-11-17 14:07:09.222393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:18:31.157 [2024-11-17 14:07:09.222399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.222639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.222652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:31.157 [2024-11-17 14:07:09.222663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:18:31.157 [2024-11-17 14:07:09.222669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.236780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.236818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:31.157 [2024-11-17 14:07:09.236827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.100 ms 00:18:31.157 [2024-11-17 14:07:09.236834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.242565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:31.157 [2024-11-17 14:07:09.244448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.244473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.157 [2024-11-17 14:07:09.244487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.584 ms 00:18:31.157 [2024-11-17 14:07:09.244493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.244530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.244537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:31.157 [2024-11-17 14:07:09.244543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:31.157 [2024-11-17 14:07:09.244551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.244600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.244611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:31.157 [2024-11-17 14:07:09.244618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:31.157 [2024-11-17 14:07:09.244623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.157 [2024-11-17 14:07:09.244639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.157 [2024-11-17 14:07:09.244645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:31.157 [2024-11-17 14:07:09.244651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:31.157 [2024-11-17 14:07:09.244656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0
00:18:31.157 [2024-11-17 14:07:09.244680] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:31.157 [2024-11-17 14:07:09.244687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.157 [2024-11-17 14:07:09.244694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:31.157 [2024-11-17 14:07:09.244699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:18:31.157 [2024-11-17 14:07:09.244708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.157 [2024-11-17 14:07:09.247728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.157 [2024-11-17 14:07:09.247832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:31.157 [2024-11-17 14:07:09.247844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms
00:18:31.157 [2024-11-17 14:07:09.247859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.157 [2024-11-17 14:07:09.247909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:31.157 [2024-11-17 14:07:09.247917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:31.157 [2024-11-17 14:07:09.247924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:18:31.157 [2024-11-17 14:07:09.247929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:31.157 [2024-11-17 14:07:09.248705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 87.562 ms, result 0
00:19:32.100 [2024-11-17T14:07:11.839Z -> 2024-11-17T14:08:11.331Z] Copying: 18/1024 ... 1024/1024 [MB] (61 incremental progress updates, 10-26 MBps per update, average 16 MBps)
[2024-11-17 14:08:11.096907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:33.030 [2024-11-17 14:08:11.097011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:33.030 [2024-11-17 14:08:11.097033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:33.030 [2024-11-17 14:08:11.097050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.030 [2024-11-17 14:08:11.097089] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:33.030 [2024-11-17 14:08:11.097920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:33.030 [2024-11-17 14:08:11.097952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:33.030 [2024-11-17 14:08:11.097968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms
00:19:33.030 [2024-11-17 14:08:11.097981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.030 [2024-11-17 14:08:11.098488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:33.030 [2024-11-17 14:08:11.098504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:33.030 [2024-11-17 14:08:11.098518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms
00:19:33.030 [2024-11-17 14:08:11.098530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.030 [2024-11-17 14:08:11.102641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:33.030 [2024-11-17 14:08:11.102824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:33.030 [2024-11-17
14:08:11.102843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.085 ms 00:19:33.030 [2024-11-17 14:08:11.102852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.109495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.109534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:33.030 [2024-11-17 14:08:11.109544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.614 ms 00:19:33.030 [2024-11-17 14:08:11.109562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.112690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.112745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:33.030 [2024-11-17 14:08:11.112756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:19:33.030 [2024-11-17 14:08:11.112765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.117433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.117498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:33.030 [2024-11-17 14:08:11.117510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.625 ms 00:19:33.030 [2024-11-17 14:08:11.117519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.117645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.117656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:33.030 [2024-11-17 14:08:11.117665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:33.030 [2024-11-17 14:08:11.117673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.120926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.121102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:33.030 [2024-11-17 14:08:11.121120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:19:33.030 [2024-11-17 14:08:11.121128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.124105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.124151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:33.030 [2024-11-17 14:08:11.124160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.834 ms 00:19:33.030 [2024-11-17 14:08:11.124167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.126386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.126431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:33.030 [2024-11-17 14:08:11.126441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.178 ms 00:19:33.030 [2024-11-17 14:08:11.126448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.128778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.030 [2024-11-17 14:08:11.128945] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:33.030 [2024-11-17 14:08:11.128963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:19:33.030 [2024-11-17 14:08:11.128971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.030 [2024-11-17 14:08:11.129078] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:33.030 [2024-11-17 14:08:11.129127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:33.030 [2024-11-17 14:08:11.129255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 
14:08:11.129329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:19:33.031 [2024-11-17 14:08:11.129525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:33.031 [2024-11-17 14:08:11.129940] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:33.031 [2024-11-17 14:08:11.129948] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8283dd-40dc-4766-adff-c1626714363d 00:19:33.031 [2024-11-17 14:08:11.129957] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:33.032 [2024-11-17 14:08:11.129965] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:33.032 [2024-11-17 14:08:11.129972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:33.032 [2024-11-17 14:08:11.129980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:33.032 [2024-11-17 14:08:11.129987] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:33.032 [2024-11-17 14:08:11.129996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:33.032 [2024-11-17 14:08:11.130003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:33.032 [2024-11-17 14:08:11.130010] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:33.032 [2024-11-17 14:08:11.130017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:33.032 [2024-11-17 14:08:11.130025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.032 [2024-11-17 14:08:11.130034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:33.032 [2024-11-17 14:08:11.130051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:19:33.032 [2024-11-17 14:08:11.130059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.132385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.032 [2024-11-17 14:08:11.132416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:33.032 [2024-11-17 14:08:11.132426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.301 ms 00:19:33.032 [2024-11-17 14:08:11.132435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.132572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.032 [2024-11-17 14:08:11.132588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:33.032 [2024-11-17 14:08:11.132598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:33.032 [2024-11-17 14:08:11.132606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.139227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.139313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.032 [2024-11-17 14:08:11.139323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.139332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 
[2024-11-17 14:08:11.139403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.139419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.032 [2024-11-17 14:08:11.139428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.139436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.139487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.139498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.032 [2024-11-17 14:08:11.139506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.139514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.139529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.139538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.032 [2024-11-17 14:08:11.139549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.139556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.152934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.152987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.032 [2024-11-17 14:08:11.152999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.153008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.164128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.164181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.032 [2024-11-17 14:08:11.164201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.164209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.164285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.164297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.032 [2024-11-17 14:08:11.164306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.164316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.164356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.164366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.032 [2024-11-17 14:08:11.164375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.164392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.032 [2024-11-17 14:08:11.164468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:33.032 [2024-11-17 14:08:11.164478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.032 [2024-11-17 14:08:11.164487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:33.032 [2024-11-17 14:08:11.164495] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.032 [2024-11-17 14:08:11.164525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.032 [2024-11-17 14:08:11.164536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:33.032 [2024-11-17 14:08:11.164544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.032 [2024-11-17 14:08:11.164557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.032 [2024-11-17 14:08:11.164606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.032 [2024-11-17 14:08:11.164616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:33.032 [2024-11-17 14:08:11.164624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.032 [2024-11-17 14:08:11.164632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.032 [2024-11-17 14:08:11.164678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:33.032 [2024-11-17 14:08:11.164689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:33.032 [2024-11-17 14:08:11.164697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:33.032 [2024-11-17 14:08:11.164713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:33.032 [2024-11-17 14:08:11.164849] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.918 ms, result 0
00:19:33.293
00:19:33.293
00:19:33.293 14:08:11 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:19:35.842 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:19:35.842 14:08:13 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:19:35.842 [2024-11-17 14:08:13.719995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
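The two restore.sh steps above summarize what this section of the log exercises: spdk_dd writes testfile through the ftl0 bdev at a block offset, FTL is shut down cleanly ('FTL shutdown', result 0, which is what persists the L2P and NV cache metadata) and brought back up, and md5sum -c confirms the restored data (testfile: OK). Below is a minimal bash sketch of that write/shutdown/verify cycle using only the tools visible in the log; the read-back flags (--ib, --of, --skip, --bs, --count), the 4 KiB block size, and the re-read-into-the-same-file step are assumptions for illustration, not taken from this log.

```bash
#!/usr/bin/env bash
# Minimal sketch of the FTL restore write/verify cycle seen in this log.
# Assumptions: spdk_dd accepts the dd-style read-back flags used below
# (--ib/--of/--skip/--bs/--count) and the FTL bdev uses 4 KiB blocks.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
DD="$SPDK/build/bin/spdk_dd"
CFG="$SPDK/test/ftl/config/ftl.json"   # JSON config that creates the ftl0 bdev
FILE="$SPDK/test/ftl/testfile"

# Checksum the source data before it goes through FTL.
md5sum "$FILE" > "$FILE.md5"

# Write the file into ftl0 at a 131072-block offset, as restore.sh@79 does.
"$DD" --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072

# (In the real test, FTL is shut down and restored between these steps.)

# Read the same region back into the file and verify it, as restore.sh@76 does.
BLOCKS=$(( $(stat -c%s "$FILE") / 4096 ))   # file size in assumed 4 KiB blocks
"$DD" --ib=ftl0 --of="$FILE" --json="$CFG" --skip=131072 --bs=4096 --count="$BLOCKS"
md5sum -c "$FILE.md5"                       # expect "<path>: OK" as in the log
```

Offsetting with --seek on the write and --skip on the read-back appears to be how the test keeps each scenario in its own block range of the same ftl0 instance.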
00:19:35.842 [2024-11-17 14:08:13.720155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87489 ] 00:19:35.842 [2024-11-17 14:08:13.872202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.842 [2024-11-17 14:08:13.920785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.842 [2024-11-17 14:08:14.033810] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.842 [2024-11-17 14:08:14.033889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.105 [2024-11-17 14:08:14.195159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.195223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.105 [2024-11-17 14:08:14.195262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.105 [2024-11-17 14:08:14.195276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.195334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.195364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.105 [2024-11-17 14:08:14.195391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:36.105 [2024-11-17 14:08:14.195400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.195429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.105 [2024-11-17 14:08:14.195709] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.105 [2024-11-17 14:08:14.195734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.195747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.105 [2024-11-17 14:08:14.195759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:36.105 [2024-11-17 14:08:14.195771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.197451] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:36.105 [2024-11-17 14:08:14.201325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.201376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:36.105 [2024-11-17 14:08:14.201388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.877 ms 00:19:36.105 [2024-11-17 14:08:14.201396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.201480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.201490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:36.105 [2024-11-17 14:08:14.201507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:36.105 [2024-11-17 14:08:14.201515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.209618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:36.105 [2024-11-17 14:08:14.209661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.105 [2024-11-17 14:08:14.209672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.056 ms 00:19:36.105 [2024-11-17 14:08:14.209688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.209790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.209800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.105 [2024-11-17 14:08:14.209809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:36.105 [2024-11-17 14:08:14.209821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.209880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.209891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.105 [2024-11-17 14:08:14.209900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:36.105 [2024-11-17 14:08:14.209908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.209935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.105 [2024-11-17 14:08:14.212016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.212059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.105 [2024-11-17 14:08:14.212070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.090 ms 00:19:36.105 [2024-11-17 14:08:14.212077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.212113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.212122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.105 [2024-11-17 14:08:14.212130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:36.105 [2024-11-17 14:08:14.212138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.212161] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.105 [2024-11-17 14:08:14.212191] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:36.105 [2024-11-17 14:08:14.212228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.105 [2024-11-17 14:08:14.212268] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:36.105 [2024-11-17 14:08:14.212377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.105 [2024-11-17 14:08:14.212387] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.105 [2024-11-17 14:08:14.212398] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.105 [2024-11-17 14:08:14.212410] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212422] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212431] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:36.105 [2024-11-17 14:08:14.212438] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:36.105 [2024-11-17 14:08:14.212445] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.105 [2024-11-17 14:08:14.212453] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.105 [2024-11-17 14:08:14.212461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.212475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.105 [2024-11-17 14:08:14.212483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:19:36.105 [2024-11-17 14:08:14.212490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.212576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.105 [2024-11-17 14:08:14.212586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.105 [2024-11-17 14:08:14.212594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:36.105 [2024-11-17 14:08:14.212602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.105 [2024-11-17 14:08:14.212700] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.105 [2024-11-17 14:08:14.212711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.105 [2024-11-17 14:08:14.212720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.105 [2024-11-17 14:08:14.212757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.105 [2024-11-17 14:08:14.212783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.105 [2024-11-17 14:08:14.212800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.105 [2024-11-17 14:08:14.212808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:36.105 [2024-11-17 14:08:14.212822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.105 [2024-11-17 14:08:14.212831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.105 [2024-11-17 14:08:14.212839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:36.105 [2024-11-17 14:08:14.212847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.105 [2024-11-17 14:08:14.212864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212872] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.105 [2024-11-17 14:08:14.212889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.105 [2024-11-17 14:08:14.212913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.105 [2024-11-17 14:08:14.212937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.105 [2024-11-17 14:08:14.212958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.105 [2024-11-17 14:08:14.212966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:36.105 [2024-11-17 14:08:14.212975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.106 [2024-11-17 14:08:14.212982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.106 [2024-11-17 14:08:14.212990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:36.106 [2024-11-17 14:08:14.212997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.106 [2024-11-17 14:08:14.213005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.106 [2024-11-17 14:08:14.213013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:36.106 [2024-11-17 14:08:14.213020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.106 [2024-11-17 14:08:14.213028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.106 [2024-11-17 14:08:14.213036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:36.106 [2024-11-17 14:08:14.213043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.106 [2024-11-17 14:08:14.213051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.106 [2024-11-17 14:08:14.213058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:36.106 [2024-11-17 14:08:14.213065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.106 [2024-11-17 14:08:14.213071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.106 [2024-11-17 14:08:14.213083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.106 [2024-11-17 14:08:14.213092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.106 [2024-11-17 14:08:14.213108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.106 [2024-11-17 14:08:14.213115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.106 [2024-11-17 14:08:14.213123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.106 [2024-11-17 14:08:14.213130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.106 
[2024-11-17 14:08:14.213137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.106 [2024-11-17 14:08:14.213143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.106 [2024-11-17 14:08:14.213151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:36.106 [2024-11-17 14:08:14.213160] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.106 [2024-11-17 14:08:14.213169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:36.106 [2024-11-17 14:08:14.213185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:36.106 [2024-11-17 14:08:14.213191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:36.106 [2024-11-17 14:08:14.213199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:36.106 [2024-11-17 14:08:14.213206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:36.106 [2024-11-17 14:08:14.213215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:36.106 [2024-11-17 14:08:14.213224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:36.106 [2024-11-17 14:08:14.213231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:36.106 [2024-11-17 14:08:14.213252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:36.106 [2024-11-17 14:08:14.213260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:36.106 [2024-11-17 14:08:14.213296] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.106 [2024-11-17 14:08:14.213305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.106 [2024-11-17 14:08:14.213326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.106 [2024-11-17 14:08:14.213334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.106 [2024-11-17 14:08:14.213342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.106 [2024-11-17 14:08:14.213350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.213361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.106 [2024-11-17 14:08:14.213370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:19:36.106 [2024-11-17 14:08:14.213377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.239459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.239847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.106 [2024-11-17 14:08:14.240048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.028 ms 00:19:36.106 [2024-11-17 14:08:14.240119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.240424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.240509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.106 [2024-11-17 14:08:14.240735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:19:36.106 [2024-11-17 14:08:14.240801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.253365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.253531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.106 [2024-11-17 14:08:14.253589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.275 ms 00:19:36.106 [2024-11-17 14:08:14.253612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.253662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.253685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.106 [2024-11-17 14:08:14.253705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:36.106 [2024-11-17 14:08:14.253725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.254321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.254393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.106 [2024-11-17 14:08:14.254422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:36.106 [2024-11-17 14:08:14.254441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.254683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.254711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.106 [2024-11-17 14:08:14.254738] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:36.106 [2024-11-17 14:08:14.254757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.261710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.261866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.106 [2024-11-17 14:08:14.261930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:19:36.106 [2024-11-17 14:08:14.261953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.265852] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:36.106 [2024-11-17 14:08:14.266036] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.106 [2024-11-17 14:08:14.266104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.266125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.106 [2024-11-17 14:08:14.266144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.036 ms 00:19:36.106 [2024-11-17 14:08:14.266162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.282137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.282331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.106 [2024-11-17 14:08:14.282402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.820 ms 00:19:36.106 [2024-11-17 14:08:14.282427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.285282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.285432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.106 [2024-11-17 14:08:14.285449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:19:36.106 [2024-11-17 14:08:14.285456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.288215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.288290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.106 [2024-11-17 14:08:14.288301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:19:36.106 [2024-11-17 14:08:14.288309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.106 [2024-11-17 14:08:14.288665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.106 [2024-11-17 14:08:14.288677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.106 [2024-11-17 14:08:14.288692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:36.107 [2024-11-17 14:08:14.288699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.312865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.313071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.107 [2024-11-17 14:08:14.313091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.144 ms 00:19:36.107 [2024-11-17 14:08:14.313100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.321402] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:36.107 [2024-11-17 14:08:14.324164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.324320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.107 [2024-11-17 14:08:14.324348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.022 ms 00:19:36.107 [2024-11-17 14:08:14.324357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.324432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.324443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:36.107 [2024-11-17 14:08:14.324458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:36.107 [2024-11-17 14:08:14.324465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.324537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.324548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.107 [2024-11-17 14:08:14.324558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:36.107 [2024-11-17 14:08:14.324568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.324589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.324598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.107 [2024-11-17 14:08:14.324606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:36.107 [2024-11-17 14:08:14.324614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.324650] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:36.107 [2024-11-17 14:08:14.324666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.324675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:36.107 [2024-11-17 14:08:14.324683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:36.107 [2024-11-17 14:08:14.324691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.329663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.329706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:36.107 [2024-11-17 14:08:14.329717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.951 ms 00:19:36.107 [2024-11-17 14:08:14.329734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 [2024-11-17 14:08:14.329813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.107 [2024-11-17 14:08:14.329823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:36.107 [2024-11-17 14:08:14.329832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:36.107 [2024-11-17 14:08:14.329841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.107 
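Each management step above is traced as separate NOTICE records (Action, name, duration, status), and the finish_msg that follows reports the total for the whole process. A minimal parsing sketch (not part of the test harness; it assumes one record per line with the console-timestamp prefix already stripped) pairs each "name:" notice with the "duration:" notice that follows it, which makes the slow steps stand out, here Initialize metadata at 26.028 ms and Restore P2L checkpoints at 24.144 ms against the 135.421 ms startup total reported next.

```python
import re

# Pair each "name:" notice with the "duration:" notice that follows it.
NAME_RE = re.compile(r"name: (.+)")
DUR_RE = re.compile(r"duration: ([0-9.]+) ms")

def summarize(lines):
    steps, pending = [], None
    for line in lines:
        m = NAME_RE.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR_RE.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

# Trimmed records from the startup sequence above.
sample = [
    "[FTL][ftl0] name: Initialize metadata",
    "[FTL][ftl0] duration: 26.028 ms",
    "[FTL][ftl0] name: Restore P2L checkpoints",
    "[FTL][ftl0] duration: 24.144 ms",
]
for name, ms in sorted(summarize(sample), key=lambda s: -s[1]):
    print(f"{ms:8.3f} ms  {name}")
```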
[2024-11-17 14:08:14.331031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.421 ms, result 0 00:19:37.051  [2024-11-17T14:08:16.740Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-17T14:08:17.684Z] Copying: 32/1024 [MB] (18 MBps) [2024-11-17T14:08:18.628Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-17T14:08:19.571Z] Copying: 57/1024 [MB] (13 MBps) [2024-11-17T14:08:20.515Z] Copying: 69/1024 [MB] (12 MBps) [2024-11-17T14:08:21.460Z] Copying: 82/1024 [MB] (12 MBps) [2024-11-17T14:08:22.404Z] Copying: 93/1024 [MB] (11 MBps) [2024-11-17T14:08:23.345Z] Copying: 108/1024 [MB] (14 MBps) [2024-11-17T14:08:24.731Z] Copying: 125/1024 [MB] (17 MBps) [2024-11-17T14:08:25.673Z] Copying: 138/1024 [MB] (12 MBps) [2024-11-17T14:08:26.615Z] Copying: 173/1024 [MB] (35 MBps) [2024-11-17T14:08:27.558Z] Copying: 220/1024 [MB] (46 MBps) [2024-11-17T14:08:28.501Z] Copying: 265/1024 [MB] (45 MBps) [2024-11-17T14:08:29.441Z] Copying: 313/1024 [MB] (47 MBps) [2024-11-17T14:08:30.385Z] Copying: 358/1024 [MB] (44 MBps) [2024-11-17T14:08:31.772Z] Copying: 370/1024 [MB] (12 MBps) [2024-11-17T14:08:32.345Z] Copying: 391/1024 [MB] (20 MBps) [2024-11-17T14:08:33.733Z] Copying: 403/1024 [MB] (12 MBps) [2024-11-17T14:08:34.677Z] Copying: 416/1024 [MB] (12 MBps) [2024-11-17T14:08:35.620Z] Copying: 433/1024 [MB] (17 MBps) [2024-11-17T14:08:36.564Z] Copying: 463/1024 [MB] (29 MBps) [2024-11-17T14:08:37.563Z] Copying: 504/1024 [MB] (40 MBps) [2024-11-17T14:08:38.508Z] Copying: 526/1024 [MB] (22 MBps) [2024-11-17T14:08:39.454Z] Copying: 545/1024 [MB] (19 MBps) [2024-11-17T14:08:40.398Z] Copying: 566/1024 [MB] (21 MBps) [2024-11-17T14:08:41.344Z] Copying: 584/1024 [MB] (17 MBps) [2024-11-17T14:08:42.731Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-17T14:08:43.678Z] Copying: 607/1024 [MB] (11 MBps) [2024-11-17T14:08:44.624Z] Copying: 622/1024 [MB] (14 MBps) [2024-11-17T14:08:45.569Z] Copying: 635/1024 [MB] (12 MBps) [2024-11-17T14:08:46.513Z] Copying: 659/1024 [MB] (24 MBps) [2024-11-17T14:08:47.459Z] Copying: 694/1024 [MB] (34 MBps) [2024-11-17T14:08:48.403Z] Copying: 717/1024 [MB] (23 MBps) [2024-11-17T14:08:49.349Z] Copying: 732/1024 [MB] (15 MBps) [2024-11-17T14:08:50.738Z] Copying: 753/1024 [MB] (20 MBps) [2024-11-17T14:08:51.681Z] Copying: 773/1024 [MB] (20 MBps) [2024-11-17T14:08:52.625Z] Copying: 812/1024 [MB] (38 MBps) [2024-11-17T14:08:53.570Z] Copying: 852/1024 [MB] (40 MBps) [2024-11-17T14:08:54.515Z] Copying: 869/1024 [MB] (16 MBps) [2024-11-17T14:08:55.461Z] Copying: 884/1024 [MB] (15 MBps) [2024-11-17T14:08:56.405Z] Copying: 898/1024 [MB] (13 MBps) [2024-11-17T14:08:57.348Z] Copying: 911/1024 [MB] (13 MBps) [2024-11-17T14:08:58.732Z] Copying: 943624/1048576 [kB] (10168 kBps) [2024-11-17T14:08:59.676Z] Copying: 937/1024 [MB] (15 MBps) [2024-11-17T14:09:00.621Z] Copying: 952/1024 [MB] (15 MBps) [2024-11-17T14:09:01.563Z] Copying: 968/1024 [MB] (16 MBps) [2024-11-17T14:09:02.508Z] Copying: 990/1024 [MB] (21 MBps) [2024-11-17T14:09:03.454Z] Copying: 1010/1024 [MB] (20 MBps) [2024-11-17T14:09:03.454Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-17 14:09:03.104920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.105013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:25.153 [2024-11-17 14:09:03.105065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:25.153 [2024-11-17 14:09:03.105085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.105117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:25.153 [2024-11-17 14:09:03.105530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.105619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:25.153 [2024-11-17 14:09:03.105663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:20:25.153 [2024-11-17 14:09:03.105680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.107580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.107669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:25.153 [2024-11-17 14:09:03.107716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.873 ms 00:20:25.153 [2024-11-17 14:09:03.107734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.120892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.120989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:25.153 [2024-11-17 14:09:03.121008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.130 ms 00:20:25.153 [2024-11-17 14:09:03.121014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.125825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.125909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:25.153 [2024-11-17 14:09:03.125956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.788 ms 00:20:25.153 [2024-11-17 14:09:03.125974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.127462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.127552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:25.153 [2024-11-17 14:09:03.127593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:20:25.153 [2024-11-17 14:09:03.127609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.131192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.131291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:25.153 [2024-11-17 14:09:03.131348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:20:25.153 [2024-11-17 14:09:03.131436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.132284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.132356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:25.153 [2024-11-17 14:09:03.132394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:20:25.153 [2024-11-17 14:09:03.132410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.134231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.134320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info 
metadata 00:20:25.153 [2024-11-17 14:09:03.134357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.800 ms 00:20:25.153 [2024-11-17 14:09:03.134373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.135860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.135944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:25.153 [2024-11-17 14:09:03.135986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:20:25.153 [2024-11-17 14:09:03.136002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.137227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.137316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:25.153 [2024-11-17 14:09:03.137358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms 00:20:25.153 [2024-11-17 14:09:03.137374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.138525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.153 [2024-11-17 14:09:03.138608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:25.153 [2024-11-17 14:09:03.138649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:20:25.153 [2024-11-17 14:09:03.138665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.153 [2024-11-17 14:09:03.138693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:25.153 [2024-11-17 14:09:03.138776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:20:25.153 [2024-11-17 14:09:03.138809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:25.153 [2024-11-17 14:09:03.138832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:25.153 [2024-11-17 14:09:03.138853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:25.153 [2024-11-17 14:09:03.138908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:25.153 [2024-11-17 14:09:03.138932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.138954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.138975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 
00:20:25.154 [2024-11-17 14:09:03.139203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.139988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 
wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140589] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:25.154 [2024-11-17 14:09:03.140623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:25.155 [2024-11-17 14:09:03.140671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:25.155 [2024-11-17 14:09:03.140682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8283dd-40dc-4766-adff-c1626714363d 00:20:25.155 [2024-11-17 14:09:03.140688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:20:25.155 [2024-11-17 14:09:03.140694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:20:25.155 [2024-11-17 14:09:03.140699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:20:25.155 [2024-11-17 14:09:03.140706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:20:25.155 [2024-11-17 14:09:03.140712] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:25.155 [2024-11-17 14:09:03.140718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:25.155 [2024-11-17 14:09:03.140723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:25.155 [2024-11-17 14:09:03.140728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:25.155 [2024-11-17 14:09:03.140733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:25.155 [2024-11-17 14:09:03.140739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.155 [2024-11-17 14:09:03.140745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:25.155 [2024-11-17 14:09:03.140755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:20:25.155 [2024-11-17 14:09:03.140761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.142134] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.155 [2024-11-17 14:09:03.142212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:25.155 [2024-11-17 14:09:03.142270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.355 ms 00:20:25.155 [2024-11-17 14:09:03.142289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.142412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.155 [2024-11-17 14:09:03.142467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:25.155 [2024-11-17 14:09:03.142485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:25.155 [2024-11-17 14:09:03.142526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.146316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.146407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.155 [2024-11-17 14:09:03.146446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.146464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.146529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.146627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.155 [2024-11-17 14:09:03.146661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.146677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.146717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.146755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.155 [2024-11-17 14:09:03.146785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.146799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.146819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.146838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.155 [2024-11-17 14:09:03.146852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.146888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.154410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.154526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.155 [2024-11-17 14:09:03.154567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.154609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.160750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.160862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.155 [2024-11-17 14:09:03.160873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.160880] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.160914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.160921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.155 [2024-11-17 14:09:03.160927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.160933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.160951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.160957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.155 [2024-11-17 14:09:03.160969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.160975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.161029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.161036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.155 [2024-11-17 14:09:03.161042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.161049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.161069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.161076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:25.155 [2024-11-17 14:09:03.161082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.161090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.161117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.161124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.155 [2024-11-17 14:09:03.161130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.161136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.161167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:25.155 [2024-11-17 14:09:03.161174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.155 [2024-11-17 14:09:03.161183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:25.155 [2024-11-17 14:09:03.161188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.155 [2024-11-17 14:09:03.161293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.352 ms, result 0 00:20:25.726 00:20:25.726 00:20:25.726 14:09:03 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:25.726 [2024-11-17 14:09:03.808606] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
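The "Dump statistics" block in the shutdown sequence above reports 512 user writes against 1472 total writes, and the WAF it prints is, as a quick check confirms, simply the ratio of the two; the surplus is FTL metadata traffic such as the Persist L2P / NV cache / band info steps logged above. Illustration only, with both counters copied from the dump:

```python
# Counters copied from the ftl_dev_dump_stats block above.
total_writes = 1472  # "total writes: 1472"
user_writes = 512    # "user writes: 512"

waf = total_writes / user_writes
assert waf == 2.8750  # matches the reported "WAF: 2.8750" exactly
print(f"WAF = {total_writes} / {user_writes} = {waf:.4f}")
```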
00:20:25.726 [2024-11-17 14:09:03.808874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88011 ] 00:20:25.726 [2024-11-17 14:09:03.957864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.726 [2024-11-17 14:09:03.989858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.988 [2024-11-17 14:09:04.071030] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.988 [2024-11-17 14:09:04.071086] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.988 [2024-11-17 14:09:04.227623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.227665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:25.988 [2024-11-17 14:09:04.227680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:25.988 [2024-11-17 14:09:04.227692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.227737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.227751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:25.988 [2024-11-17 14:09:04.227762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:25.988 [2024-11-17 14:09:04.227769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.227785] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:25.988 [2024-11-17 14:09:04.228022] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:25.988 [2024-11-17 14:09:04.228036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.228044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:25.988 [2024-11-17 14:09:04.228054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:25.988 [2024-11-17 14:09:04.228067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.229092] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:25.988 [2024-11-17 14:09:04.231539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.231573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:25.988 [2024-11-17 14:09:04.231583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:20:25.988 [2024-11-17 14:09:04.231590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.231644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.231657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:25.988 [2024-11-17 14:09:04.231665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:25.988 [2024-11-17 14:09:04.231674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.236532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:25.988 [2024-11-17 14:09:04.236562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:25.988 [2024-11-17 14:09:04.236574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:20:25.988 [2024-11-17 14:09:04.236585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.236665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.236674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:25.988 [2024-11-17 14:09:04.236681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:25.988 [2024-11-17 14:09:04.236689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.236729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.236745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:25.988 [2024-11-17 14:09:04.236753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:25.988 [2024-11-17 14:09:04.236760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.236782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:25.988 [2024-11-17 14:09:04.238099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.238127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:25.988 [2024-11-17 14:09:04.238136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:20:25.988 [2024-11-17 14:09:04.238149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.238176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.238188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:25.988 [2024-11-17 14:09:04.238195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:25.988 [2024-11-17 14:09:04.238202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.238224] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:25.988 [2024-11-17 14:09:04.238259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:25.988 [2024-11-17 14:09:04.238295] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:25.988 [2024-11-17 14:09:04.238310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:25.988 [2024-11-17 14:09:04.238426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:25.988 [2024-11-17 14:09:04.238436] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:25.988 [2024-11-17 14:09:04.238446] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:25.988 [2024-11-17 14:09:04.238456] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:25.988 [2024-11-17 14:09:04.238471] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:25.988 [2024-11-17 14:09:04.238478] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:25.988 [2024-11-17 14:09:04.238486] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:25.988 [2024-11-17 14:09:04.238492] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:25.988 [2024-11-17 14:09:04.238499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:25.988 [2024-11-17 14:09:04.238507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.238514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:25.988 [2024-11-17 14:09:04.238522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:25.988 [2024-11-17 14:09:04.238529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.238609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.988 [2024-11-17 14:09:04.238622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:25.988 [2024-11-17 14:09:04.238629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:25.988 [2024-11-17 14:09:04.238636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.988 [2024-11-17 14:09:04.238731] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:25.988 [2024-11-17 14:09:04.238740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:25.988 [2024-11-17 14:09:04.238748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.988 [2024-11-17 14:09:04.238763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.988 [2024-11-17 14:09:04.238772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:25.988 [2024-11-17 14:09:04.238780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:25.988 [2024-11-17 14:09:04.238787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:25.988 [2024-11-17 14:09:04.238795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:25.988 [2024-11-17 14:09:04.238803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:25.988 [2024-11-17 14:09:04.238810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.988 [2024-11-17 14:09:04.238819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:25.988 [2024-11-17 14:09:04.238827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:25.988 [2024-11-17 14:09:04.238834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:25.988 [2024-11-17 14:09:04.238845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:25.988 [2024-11-17 14:09:04.238853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:25.988 [2024-11-17 14:09:04.238861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.988 [2024-11-17 14:09:04.238869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:25.988 [2024-11-17 14:09:04.238877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:25.988 [2024-11-17 14:09:04.238884] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.989 [2024-11-17 14:09:04.238892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:25.989 [2024-11-17 14:09:04.238900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:25.989 [2024-11-17 14:09:04.238908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.989 [2024-11-17 14:09:04.238915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:25.989 [2024-11-17 14:09:04.238923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:25.989 [2024-11-17 14:09:04.238930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.989 [2024-11-17 14:09:04.238938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:25.989 [2024-11-17 14:09:04.238946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:25.989 [2024-11-17 14:09:04.238953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.989 [2024-11-17 14:09:04.238960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:25.989 [2024-11-17 14:09:04.238973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:25.989 [2024-11-17 14:09:04.238981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:25.989 [2024-11-17 14:09:04.238988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:25.989 [2024-11-17 14:09:04.238996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:25.989 [2024-11-17 14:09:04.239003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.989 [2024-11-17 14:09:04.239010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:25.989 [2024-11-17 14:09:04.239018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:25.989 [2024-11-17 14:09:04.239026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:25.989 [2024-11-17 14:09:04.239033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:25.989 [2024-11-17 14:09:04.239041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:25.989 [2024-11-17 14:09:04.239048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.989 [2024-11-17 14:09:04.239056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:25.989 [2024-11-17 14:09:04.239063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:25.989 [2024-11-17 14:09:04.239071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.989 [2024-11-17 14:09:04.239079] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:25.989 [2024-11-17 14:09:04.239087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:25.989 [2024-11-17 14:09:04.239097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:25.989 [2024-11-17 14:09:04.239108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:25.989 [2024-11-17 14:09:04.239119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:25.989 [2024-11-17 14:09:04.239127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:25.989 [2024-11-17 14:09:04.239135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:25.989 
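The same layout is printed twice during each FTL open: once as human-readable dump_region lines (offsets in MiB) and once as the superblock v5 region tables from ftl_superblock_v5_md_layout_dump (offsets and sizes in blocks). A small conversion sketch, illustration only, maps the block counts back to MiB. The 4 KiB block size is an assumption, but one these dumps support, since 0x5000 blocks then equals the 80.00 MiB shown for the l2p region; the region-type-to-name pairing in the comments is likewise inferred by matching sizes and offsets against the dump_region lines, not taken from the source.

```python
FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed, see note above)

def to_mib(blocks):
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

NVC_REGIONS = [  # (type, blk_offs, blk_sz) from "SB metadata layout - nvc"
    (0x0, 0x0000, 0x0020),  # superblock (0.12 MiB)
    (0x2, 0x0020, 0x5000),  # l2p (80.00 MiB at offset 0.12 MiB)
    (0x3, 0x5020, 0x0080),  # band_md (0.50 MiB at offset 80.12 MiB)
    (0x4, 0x50a0, 0x0080),  # band_md_mirror
    (0xa, 0x5120, 0x0800),  # p2l0 (8.00 MiB at offset 81.12 MiB)
]

for rtype, offs, size in NVC_REGIONS:
    print(f"type 0x{rtype:x}: offset {to_mib(offs):7.2f} MiB,"
          f" size {to_mib(size):6.2f} MiB")
```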
[2024-11-17 14:09:04.239143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:25.989 [2024-11-17 14:09:04.239151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:25.989 [2024-11-17 14:09:04.239159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:25.989 [2024-11-17 14:09:04.239168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:25.989 [2024-11-17 14:09:04.239178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:25.989 [2024-11-17 14:09:04.239199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:25.989 [2024-11-17 14:09:04.239208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:25.989 [2024-11-17 14:09:04.239216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:25.989 [2024-11-17 14:09:04.239223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:25.989 [2024-11-17 14:09:04.239230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:25.989 [2024-11-17 14:09:04.239248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:25.989 [2024-11-17 14:09:04.239256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:25.989 [2024-11-17 14:09:04.239263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:25.989 [2024-11-17 14:09:04.239270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:25.989 [2024-11-17 14:09:04.239304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:25.989 [2024-11-17 14:09:04.239311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:25.989 [2024-11-17 14:09:04.239334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:25.989 [2024-11-17 14:09:04.239341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:25.989 [2024-11-17 14:09:04.239349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:25.989 [2024-11-17 14:09:04.239356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.239363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:25.989 [2024-11-17 14:09:04.239376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:20:25.989 [2024-11-17 14:09:04.239383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.255276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.255316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:25.989 [2024-11-17 14:09:04.255338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.851 ms 00:20:25.989 [2024-11-17 14:09:04.255350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.255438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.255447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:25.989 [2024-11-17 14:09:04.255456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:25.989 [2024-11-17 14:09:04.255463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.263560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.263593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:25.989 [2024-11-17 14:09:04.263608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.043 ms 00:20:25.989 [2024-11-17 14:09:04.263615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.263643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.263651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:25.989 [2024-11-17 14:09:04.263660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:25.989 [2024-11-17 14:09:04.263667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.263996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.264024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:25.989 [2024-11-17 14:09:04.264033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:20:25.989 [2024-11-17 14:09:04.264044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.264171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.264184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:25.989 [2024-11-17 14:09:04.264195] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:20:25.989 [2024-11-17 14:09:04.264203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.268790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.268821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:25.989 [2024-11-17 14:09:04.268834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.565 ms 00:20:25.989 [2024-11-17 14:09:04.268842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:25.989 [2024-11-17 14:09:04.271563] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:20:25.989 [2024-11-17 14:09:04.271596] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:25.989 [2024-11-17 14:09:04.271606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:25.989 [2024-11-17 14:09:04.271617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:25.989 [2024-11-17 14:09:04.271626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:20:25.989 [2024-11-17 14:09:04.271633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.286251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.286282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:26.251 [2024-11-17 14:09:04.286301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.581 ms 00:20:26.251 [2024-11-17 14:09:04.286311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.288175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.288206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:26.251 [2024-11-17 14:09:04.288214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.814 ms 00:20:26.251 [2024-11-17 14:09:04.288221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.289984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.290107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:26.251 [2024-11-17 14:09:04.290121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:20:26.251 [2024-11-17 14:09:04.290128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.290450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.290467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.251 [2024-11-17 14:09:04.290476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:26.251 [2024-11-17 14:09:04.290482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.307798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.307846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:26.251 [2024-11-17 14:09:04.307858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.300 ms 00:20:26.251 [2024-11-17 14:09:04.307866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.315396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:26.251 [2024-11-17 14:09:04.318028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.318057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.251 [2024-11-17 14:09:04.318073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.126 ms 00:20:26.251 [2024-11-17 14:09:04.318085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.318133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.318144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:26.251 [2024-11-17 14:09:04.318154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:26.251 [2024-11-17 14:09:04.318162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.318742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.318790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.251 [2024-11-17 14:09:04.318802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:20:26.251 [2024-11-17 14:09:04.318809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.318836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.318844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:26.251 [2024-11-17 14:09:04.318853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:26.251 [2024-11-17 14:09:04.318865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.318899] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:26.251 [2024-11-17 14:09:04.318908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.318922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:26.251 [2024-11-17 14:09:04.318929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:26.251 [2024-11-17 14:09:04.318940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.322609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.322641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:26.251 [2024-11-17 14:09:04.322658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:20:26.251 [2024-11-17 14:09:04.322668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 [2024-11-17 14:09:04.322733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.251 [2024-11-17 14:09:04.322742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:26.251 [2024-11-17 14:09:04.322750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:26.251 [2024-11-17 14:09:04.322761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.251 
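Each management step above is traced as an Action with a name, a duration in ms, and a status; the finish_msg record that follows then reports the total for the whole 'FTL startup' process (95.811 ms here). A minimal sketch for totalling the per-step durations from a saved log (ftl.log is again a placeholder; note this sums every trace_step record in the file, so startup and shutdown runs should be split apart first if both are present):

    # Sum the per-step durations, then show the reported totals for comparison.
    awk '/trace_step.*duration:/ { sum += $(NF-1) }
         END { printf "sum of traced steps: %.3f ms\n", sum }' ftl.log
    grep 'finish_msg' ftl.log

The two figures will not match exactly: the traced steps do not account for all wall time spent inside the management process.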
[2024-11-17 14:09:04.323822] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.811 ms, result 0 00:20:27.238  [2024-11-17T14:09:06.512Z] Copying: 1012/1048576 [kB] (1012 kBps) [2024-11-17T14:09:07.897Z] Copying: 11/1024 [MB] (10 MBps) [2024-11-17T14:09:08.842Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-17T14:09:09.784Z] Copying: 41/1024 [MB] (18 MBps) [2024-11-17T14:09:10.728Z] Copying: 62/1024 [MB] (20 MBps) [2024-11-17T14:09:11.672Z] Copying: 76/1024 [MB] (14 MBps) [2024-11-17T14:09:12.616Z] Copying: 92/1024 [MB] (16 MBps) [2024-11-17T14:09:13.563Z] Copying: 118/1024 [MB] (25 MBps) [2024-11-17T14:09:14.506Z] Copying: 137/1024 [MB] (19 MBps) [2024-11-17T14:09:15.895Z] Copying: 150/1024 [MB] (12 MBps) [2024-11-17T14:09:16.841Z] Copying: 164/1024 [MB] (13 MBps) [2024-11-17T14:09:17.786Z] Copying: 181/1024 [MB] (17 MBps) [2024-11-17T14:09:18.731Z] Copying: 195/1024 [MB] (14 MBps) [2024-11-17T14:09:19.674Z] Copying: 206/1024 [MB] (10 MBps) [2024-11-17T14:09:20.616Z] Copying: 216/1024 [MB] (10 MBps) [2024-11-17T14:09:21.560Z] Copying: 229/1024 [MB] (12 MBps) [2024-11-17T14:09:22.943Z] Copying: 239/1024 [MB] (10 MBps) [2024-11-17T14:09:23.515Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-17T14:09:24.902Z] Copying: 269/1024 [MB] (18 MBps) [2024-11-17T14:09:25.846Z] Copying: 283/1024 [MB] (14 MBps) [2024-11-17T14:09:26.791Z] Copying: 298/1024 [MB] (14 MBps) [2024-11-17T14:09:27.736Z] Copying: 321/1024 [MB] (23 MBps) [2024-11-17T14:09:28.678Z] Copying: 343/1024 [MB] (21 MBps) [2024-11-17T14:09:29.620Z] Copying: 356/1024 [MB] (13 MBps) [2024-11-17T14:09:30.562Z] Copying: 374/1024 [MB] (18 MBps) [2024-11-17T14:09:31.506Z] Copying: 388/1024 [MB] (13 MBps) [2024-11-17T14:09:32.893Z] Copying: 406/1024 [MB] (17 MBps) [2024-11-17T14:09:33.837Z] Copying: 418/1024 [MB] (12 MBps) [2024-11-17T14:09:34.813Z] Copying: 437/1024 [MB] (18 MBps) [2024-11-17T14:09:35.776Z] Copying: 451/1024 [MB] (14 MBps) [2024-11-17T14:09:36.720Z] Copying: 467/1024 [MB] (15 MBps) [2024-11-17T14:09:37.662Z] Copying: 483/1024 [MB] (16 MBps) [2024-11-17T14:09:38.606Z] Copying: 498/1024 [MB] (14 MBps) [2024-11-17T14:09:39.549Z] Copying: 514/1024 [MB] (16 MBps) [2024-11-17T14:09:40.937Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-17T14:09:41.508Z] Copying: 536/1024 [MB] (11 MBps) [2024-11-17T14:09:42.894Z] Copying: 551/1024 [MB] (14 MBps) [2024-11-17T14:09:43.836Z] Copying: 574/1024 [MB] (23 MBps) [2024-11-17T14:09:44.781Z] Copying: 596/1024 [MB] (22 MBps) [2024-11-17T14:09:45.725Z] Copying: 613/1024 [MB] (17 MBps) [2024-11-17T14:09:46.677Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-17T14:09:47.620Z] Copying: 635/1024 [MB] (10 MBps) [2024-11-17T14:09:48.564Z] Copying: 655/1024 [MB] (19 MBps) [2024-11-17T14:09:49.507Z] Copying: 668/1024 [MB] (13 MBps) [2024-11-17T14:09:50.893Z] Copying: 693/1024 [MB] (25 MBps) [2024-11-17T14:09:51.838Z] Copying: 707/1024 [MB] (13 MBps) [2024-11-17T14:09:52.783Z] Copying: 729/1024 [MB] (21 MBps) [2024-11-17T14:09:53.728Z] Copying: 741/1024 [MB] (12 MBps) [2024-11-17T14:09:54.672Z] Copying: 759/1024 [MB] (17 MBps) [2024-11-17T14:09:55.616Z] Copying: 779/1024 [MB] (19 MBps) [2024-11-17T14:09:56.557Z] Copying: 800/1024 [MB] (21 MBps) [2024-11-17T14:09:57.943Z] Copying: 818/1024 [MB] (18 MBps) [2024-11-17T14:09:58.516Z] Copying: 843/1024 [MB] (24 MBps) [2024-11-17T14:09:59.903Z] Copying: 866/1024 [MB] (22 MBps) [2024-11-17T14:10:00.848Z] Copying: 897/1024 [MB] (31 MBps) [2024-11-17T14:10:01.795Z] Copying: 911/1024 [MB] (13 MBps) 
[2024-11-17T14:10:02.739Z] Copying: 921/1024 [MB] (10 MBps) [2024-11-17T14:10:03.705Z] Copying: 932/1024 [MB] (10 MBps) [2024-11-17T14:10:04.652Z] Copying: 942/1024 [MB] (10 MBps) [2024-11-17T14:10:05.595Z] Copying: 956/1024 [MB] (13 MBps) [2024-11-17T14:10:06.539Z] Copying: 967/1024 [MB] (10 MBps) [2024-11-17T14:10:07.926Z] Copying: 977/1024 [MB] (10 MBps) [2024-11-17T14:10:08.881Z] Copying: 991/1024 [MB] (14 MBps) [2024-11-17T14:10:09.143Z] Copying: 1010/1024 [MB] (18 MBps) [2024-11-17T14:10:09.143Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 14:10:09.127498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.842 [2024-11-17 14:10:09.127606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:30.842 [2024-11-17 14:10:09.127634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:30.842 [2024-11-17 14:10:09.127659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.842 [2024-11-17 14:10:09.127698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:30.842 [2024-11-17 14:10:09.128639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.842 [2024-11-17 14:10:09.128778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:30.842 [2024-11-17 14:10:09.128799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:21:30.842 [2024-11-17 14:10:09.128815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.842 [2024-11-17 14:10:09.129270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.842 [2024-11-17 14:10:09.129298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:30.842 [2024-11-17 14:10:09.129325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:21:30.842 [2024-11-17 14:10:09.129339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.103 [2024-11-17 14:10:09.144303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.103 [2024-11-17 14:10:09.144357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.103 [2024-11-17 14:10:09.144370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.935 ms 00:21:31.103 [2024-11-17 14:10:09.144378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.103 [2024-11-17 14:10:09.150662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.103 [2024-11-17 14:10:09.150866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.103 [2024-11-17 14:10:09.150887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.250 ms 00:21:31.103 [2024-11-17 14:10:09.150896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.103 [2024-11-17 14:10:09.153751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.103 [2024-11-17 14:10:09.153802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.104 [2024-11-17 14:10:09.153814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.787 ms 00:21:31.104 [2024-11-17 14:10:09.153821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.104 [2024-11-17 14:10:09.159228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.104 [2024-11-17 14:10:09.159442] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.104 [2024-11-17 14:10:09.159506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:21:31.104 [2024-11-17 14:10:09.159543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.527340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.367 [2024-11-17 14:10:09.527547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.367 [2024-11-17 14:10:09.527751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 367.545 ms 00:21:31.367 [2024-11-17 14:10:09.527767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.531057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.367 [2024-11-17 14:10:09.531108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.367 [2024-11-17 14:10:09.531119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.260 ms 00:21:31.367 [2024-11-17 14:10:09.531127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.534040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.367 [2024-11-17 14:10:09.534222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.367 [2024-11-17 14:10:09.534258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:21:31.367 [2024-11-17 14:10:09.534266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.536720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.367 [2024-11-17 14:10:09.536768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.367 [2024-11-17 14:10:09.536779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:21:31.367 [2024-11-17 14:10:09.536787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.538502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.367 [2024-11-17 14:10:09.538669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.367 [2024-11-17 14:10:09.538736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:21:31.367 [2024-11-17 14:10:09.538762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.367 [2024-11-17 14:10:09.538810] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.367 [2024-11-17 14:10:09.538851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:21:31.367 [2024-11-17 14:10:09.538883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.538911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.538940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 
14:10:09.539082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.539983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.367 [2024-11-17 14:10:09.540512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.540972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.367 [2024-11-17 14:10:09.541483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.541973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:31.368 [2024-11-17 14:10:09.542728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.368 [2024-11-17 14:10:09.542736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7d8283dd-40dc-4766-adff-c1626714363d 00:21:31.368 [2024-11-17 14:10:09.542745] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:21:31.368 [2024-11-17 14:10:09.542753] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:21:31.368 [2024-11-17 14:10:09.542765] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:21:31.368 [2024-11-17 14:10:09.542786] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0073 00:21:31.368 [2024-11-17 14:10:09.542795] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.368 [2024-11-17 14:10:09.542803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.368 [2024-11-17 14:10:09.542811] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.368 [2024-11-17 14:10:09.542818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.368 [2024-11-17 14:10:09.542825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.368 [2024-11-17 14:10:09.542833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.368 [2024-11-17 14:10:09.542842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.368 [2024-11-17 14:10:09.542850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.025 ms 00:21:31.368 [2024-11-17 14:10:09.542859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.545276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.368 [2024-11-17 14:10:09.545330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.368 [2024-11-17 14:10:09.545341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:21:31.368 [2024-11-17 14:10:09.545349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.545497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.368 [2024-11-17 14:10:09.545508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.368 [2024-11-17 14:10:09.545517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:31.368 [2024-11-17 14:10:09.545525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.552359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.552408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.368 [2024-11-17 14:10:09.552419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.552427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.552491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.552500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.368 [2024-11-17 14:10:09.552508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.552516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.552567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.552585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.368 [2024-11-17 14:10:09.552594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.552607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.552625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.552634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.368 [2024-11-17 14:10:09.552643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:31.368 [2024-11-17 14:10:09.552650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.566062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.566115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.368 [2024-11-17 14:10:09.566127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.566135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.577058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.577106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.368 [2024-11-17 14:10:09.577118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.577126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.577176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.577186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.368 [2024-11-17 14:10:09.577203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.368 [2024-11-17 14:10:09.577212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.368 [2024-11-17 14:10:09.577264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.368 [2024-11-17 14:10:09.577274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.369 [2024-11-17 14:10:09.577283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.369 [2024-11-17 14:10:09.577290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.369 [2024-11-17 14:10:09.577370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.369 [2024-11-17 14:10:09.577380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.369 [2024-11-17 14:10:09.577389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.369 [2024-11-17 14:10:09.577403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.369 [2024-11-17 14:10:09.577433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.369 [2024-11-17 14:10:09.577442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.369 [2024-11-17 14:10:09.577451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.369 [2024-11-17 14:10:09.577458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.369 [2024-11-17 14:10:09.577495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.369 [2024-11-17 14:10:09.577504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.369 [2024-11-17 14:10:09.577513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.369 [2024-11-17 14:10:09.577524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.369 [2024-11-17 14:10:09.577569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.369 [2024-11-17 14:10:09.577579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.369 [2024-11-17 14:10:09.577587] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.369 [2024-11-17 14:10:09.577594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.369 [2024-11-17 14:10:09.577722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.209 ms, result 0 00:21:31.631 00:21:31.631 00:21:31.631 14:10:09 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:34.177 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.177 Process with pid 86086 is not found 00:21:34.177 Remove shared memory files 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86086 00:21:34.177 14:10:12 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86086 ']' 00:21:34.177 14:10:12 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86086 00:21:34.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86086) - No such process 00:21:34.177 14:10:12 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86086 is not found' 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:34.177 14:10:12 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:21:34.177 ************************************ 00:21:34.177 END TEST ftl_restore 00:21:34.177 ************************************ 00:21:34.177 00:21:34.177 real 4m12.802s 00:21:34.177 user 4m0.942s 00:21:34.177 sys 0m11.558s 00:21:34.177 14:10:12 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:34.177 14:10:12 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:34.177 14:10:12 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:34.177 14:10:12 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:34.178 14:10:12 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:34.178 14:10:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:34.178 ************************************ 00:21:34.178 START TEST ftl_dirty_shutdown 00:21:34.178 ************************************ 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:21:34.178 * Looking for test storage... 
00:21:34.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.178 --rc genhtml_branch_coverage=1 00:21:34.178 --rc genhtml_function_coverage=1 00:21:34.178 --rc genhtml_legend=1 00:21:34.178 --rc geninfo_all_blocks=1 00:21:34.178 --rc geninfo_unexecuted_blocks=1 00:21:34.178 00:21:34.178 ' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.178 --rc genhtml_branch_coverage=1 00:21:34.178 --rc genhtml_function_coverage=1 00:21:34.178 --rc genhtml_legend=1 00:21:34.178 --rc geninfo_all_blocks=1 00:21:34.178 --rc geninfo_unexecuted_blocks=1 00:21:34.178 00:21:34.178 ' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.178 --rc genhtml_branch_coverage=1 00:21:34.178 --rc genhtml_function_coverage=1 00:21:34.178 --rc genhtml_legend=1 00:21:34.178 --rc geninfo_all_blocks=1 00:21:34.178 --rc geninfo_unexecuted_blocks=1 00:21:34.178 00:21:34.178 ' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:34.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.178 --rc genhtml_branch_coverage=1 00:21:34.178 --rc genhtml_function_coverage=1 00:21:34.178 --rc genhtml_legend=1 00:21:34.178 --rc geninfo_all_blocks=1 00:21:34.178 --rc geninfo_unexecuted_blocks=1 00:21:34.178 00:21:34.178 ' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:21:34.178 14:10:12 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88781 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88781 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 88781 ']' 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.178 14:10:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.440 [2024-11-17 14:10:12.520047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:34.440 [2024-11-17 14:10:12.520870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88781 ] 00:21:34.440 [2024-11-17 14:10:12.674796] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.440 [2024-11-17 14:10:12.724284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.383 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.383 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:21:35.383 14:10:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:35.383 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:21:35.383 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:35.384 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:35.646 { 00:21:35.646 "name": "nvme0n1", 00:21:35.646 "aliases": [ 00:21:35.646 "a67cbd7f-62e3-4011-b62b-8d8fa4ea9a10" 00:21:35.646 ], 00:21:35.646 "product_name": "NVMe disk", 00:21:35.646 "block_size": 4096, 00:21:35.646 "num_blocks": 1310720, 00:21:35.646 "uuid": "a67cbd7f-62e3-4011-b62b-8d8fa4ea9a10", 00:21:35.646 "numa_id": -1, 00:21:35.646 "assigned_rate_limits": { 00:21:35.646 "rw_ios_per_sec": 0, 00:21:35.646 "rw_mbytes_per_sec": 0, 00:21:35.646 "r_mbytes_per_sec": 0, 00:21:35.646 "w_mbytes_per_sec": 0 00:21:35.646 }, 00:21:35.646 "claimed": true, 00:21:35.646 "claim_type": "read_many_write_one", 00:21:35.646 "zoned": false, 00:21:35.646 "supported_io_types": { 00:21:35.646 "read": true, 00:21:35.646 "write": true, 00:21:35.646 "unmap": true, 00:21:35.646 "flush": true, 00:21:35.646 "reset": true, 00:21:35.646 "nvme_admin": true, 00:21:35.646 "nvme_io": true, 00:21:35.646 "nvme_io_md": false, 00:21:35.646 "write_zeroes": true, 00:21:35.646 "zcopy": false, 00:21:35.646 "get_zone_info": false, 00:21:35.646 "zone_management": false, 00:21:35.646 "zone_append": false, 00:21:35.646 "compare": true, 00:21:35.646 "compare_and_write": false, 00:21:35.646 "abort": true, 00:21:35.646 "seek_hole": false, 00:21:35.646 "seek_data": false, 00:21:35.646 
"copy": true, 00:21:35.646 "nvme_iov_md": false 00:21:35.646 }, 00:21:35.646 "driver_specific": { 00:21:35.646 "nvme": [ 00:21:35.646 { 00:21:35.646 "pci_address": "0000:00:11.0", 00:21:35.646 "trid": { 00:21:35.646 "trtype": "PCIe", 00:21:35.646 "traddr": "0000:00:11.0" 00:21:35.646 }, 00:21:35.646 "ctrlr_data": { 00:21:35.646 "cntlid": 0, 00:21:35.646 "vendor_id": "0x1b36", 00:21:35.646 "model_number": "QEMU NVMe Ctrl", 00:21:35.646 "serial_number": "12341", 00:21:35.646 "firmware_revision": "8.0.0", 00:21:35.646 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:35.646 "oacs": { 00:21:35.646 "security": 0, 00:21:35.646 "format": 1, 00:21:35.646 "firmware": 0, 00:21:35.646 "ns_manage": 1 00:21:35.646 }, 00:21:35.646 "multi_ctrlr": false, 00:21:35.646 "ana_reporting": false 00:21:35.646 }, 00:21:35.646 "vs": { 00:21:35.646 "nvme_version": "1.4" 00:21:35.646 }, 00:21:35.646 "ns_data": { 00:21:35.646 "id": 1, 00:21:35.646 "can_share": false 00:21:35.646 } 00:21:35.646 } 00:21:35.646 ], 00:21:35.646 "mp_policy": "active_passive" 00:21:35.646 } 00:21:35.646 } 00:21:35.646 ]' 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:35.646 14:10:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:35.907 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=bb373656-262d-48a9-9413-b8e6f226da14 00:21:35.907 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:21:35.908 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb373656-262d-48a9-9413-b8e6f226da14 00:21:36.169 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:36.430 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=54a45a5b-718e-4bc9-9d65-17d583d4cab7 00:21:36.430 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54a45a5b-718e-4bc9-9d65-17d583d4cab7 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:36.692 14:10:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:36.953 { 00:21:36.953 "name": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:36.953 "aliases": [ 00:21:36.953 "lvs/nvme0n1p0" 00:21:36.953 ], 00:21:36.953 "product_name": "Logical Volume", 00:21:36.953 "block_size": 4096, 00:21:36.953 "num_blocks": 26476544, 00:21:36.953 "uuid": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:36.953 "assigned_rate_limits": { 00:21:36.953 "rw_ios_per_sec": 0, 00:21:36.953 "rw_mbytes_per_sec": 0, 00:21:36.953 "r_mbytes_per_sec": 0, 00:21:36.953 "w_mbytes_per_sec": 0 00:21:36.953 }, 00:21:36.953 "claimed": false, 00:21:36.953 "zoned": false, 00:21:36.953 "supported_io_types": { 00:21:36.953 "read": true, 00:21:36.953 "write": true, 00:21:36.953 "unmap": true, 00:21:36.953 "flush": false, 00:21:36.953 "reset": true, 00:21:36.953 "nvme_admin": false, 00:21:36.953 "nvme_io": false, 00:21:36.953 "nvme_io_md": false, 00:21:36.953 "write_zeroes": true, 00:21:36.953 "zcopy": false, 00:21:36.953 "get_zone_info": false, 00:21:36.953 "zone_management": false, 00:21:36.953 "zone_append": false, 00:21:36.953 "compare": false, 00:21:36.953 "compare_and_write": false, 00:21:36.953 "abort": false, 00:21:36.953 "seek_hole": true, 00:21:36.953 "seek_data": true, 00:21:36.953 "copy": false, 00:21:36.953 "nvme_iov_md": false 00:21:36.953 }, 00:21:36.953 "driver_specific": { 00:21:36.953 "lvol": { 00:21:36.953 "lvol_store_uuid": "54a45a5b-718e-4bc9-9d65-17d583d4cab7", 00:21:36.953 "base_bdev": "nvme0n1", 00:21:36.953 "thin_provision": true, 00:21:36.953 "num_allocated_clusters": 0, 00:21:36.953 "snapshot": false, 00:21:36.953 "clone": false, 00:21:36.953 "esnap_clone": false 00:21:36.953 } 00:21:36.953 } 00:21:36.953 } 00:21:36.953 ]' 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:21:36.953 14:10:15 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:37.215 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:37.476 { 00:21:37.476 "name": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:37.476 "aliases": [ 00:21:37.476 "lvs/nvme0n1p0" 00:21:37.476 ], 00:21:37.476 "product_name": "Logical Volume", 00:21:37.476 "block_size": 4096, 00:21:37.476 "num_blocks": 26476544, 00:21:37.476 "uuid": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:37.476 "assigned_rate_limits": { 00:21:37.476 "rw_ios_per_sec": 0, 00:21:37.476 "rw_mbytes_per_sec": 0, 00:21:37.476 "r_mbytes_per_sec": 0, 00:21:37.476 "w_mbytes_per_sec": 0 00:21:37.476 }, 00:21:37.476 "claimed": false, 00:21:37.476 "zoned": false, 00:21:37.476 "supported_io_types": { 00:21:37.476 "read": true, 00:21:37.476 "write": true, 00:21:37.476 "unmap": true, 00:21:37.476 "flush": false, 00:21:37.476 "reset": true, 00:21:37.476 "nvme_admin": false, 00:21:37.476 "nvme_io": false, 00:21:37.476 "nvme_io_md": false, 00:21:37.476 "write_zeroes": true, 00:21:37.476 "zcopy": false, 00:21:37.476 "get_zone_info": false, 00:21:37.476 "zone_management": false, 00:21:37.476 "zone_append": false, 00:21:37.476 "compare": false, 00:21:37.476 "compare_and_write": false, 00:21:37.476 "abort": false, 00:21:37.476 "seek_hole": true, 00:21:37.476 "seek_data": true, 00:21:37.476 "copy": false, 00:21:37.476 "nvme_iov_md": false 00:21:37.476 }, 00:21:37.476 "driver_specific": { 00:21:37.476 "lvol": { 00:21:37.476 "lvol_store_uuid": "54a45a5b-718e-4bc9-9d65-17d583d4cab7", 00:21:37.476 "base_bdev": "nvme0n1", 00:21:37.476 "thin_provision": true, 00:21:37.476 "num_allocated_clusters": 0, 00:21:37.476 "snapshot": false, 00:21:37.476 "clone": false, 00:21:37.476 "esnap_clone": false 00:21:37.476 } 00:21:37.476 } 00:21:37.476 } 00:21:37.476 ]' 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:37.476 14:10:15 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:21:37.737 14:10:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee2ebae6-09e7-4b32-8db9-e0b91c34932b 00:21:37.998 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:21:37.998 { 00:21:37.999 "name": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:37.999 "aliases": [ 00:21:37.999 "lvs/nvme0n1p0" 00:21:37.999 ], 00:21:37.999 "product_name": "Logical Volume", 00:21:37.999 "block_size": 4096, 00:21:37.999 "num_blocks": 26476544, 00:21:37.999 "uuid": "ee2ebae6-09e7-4b32-8db9-e0b91c34932b", 00:21:37.999 "assigned_rate_limits": { 00:21:37.999 "rw_ios_per_sec": 0, 00:21:37.999 "rw_mbytes_per_sec": 0, 00:21:37.999 "r_mbytes_per_sec": 0, 00:21:37.999 "w_mbytes_per_sec": 0 00:21:37.999 }, 00:21:37.999 "claimed": false, 00:21:37.999 "zoned": false, 00:21:37.999 "supported_io_types": { 00:21:37.999 "read": true, 00:21:37.999 "write": true, 00:21:37.999 "unmap": true, 00:21:37.999 "flush": false, 00:21:37.999 "reset": true, 00:21:37.999 "nvme_admin": false, 00:21:37.999 "nvme_io": false, 00:21:37.999 "nvme_io_md": false, 00:21:37.999 "write_zeroes": true, 00:21:37.999 "zcopy": false, 00:21:37.999 "get_zone_info": false, 00:21:37.999 "zone_management": false, 00:21:37.999 "zone_append": false, 00:21:37.999 "compare": false, 00:21:37.999 "compare_and_write": false, 00:21:37.999 "abort": false, 00:21:37.999 "seek_hole": true, 00:21:37.999 "seek_data": true, 00:21:37.999 "copy": false, 00:21:37.999 "nvme_iov_md": false 00:21:37.999 }, 00:21:37.999 "driver_specific": { 00:21:37.999 "lvol": { 00:21:37.999 "lvol_store_uuid": "54a45a5b-718e-4bc9-9d65-17d583d4cab7", 00:21:37.999 "base_bdev": "nvme0n1", 00:21:37.999 "thin_provision": true, 00:21:37.999 "num_allocated_clusters": 0, 00:21:37.999 "snapshot": false, 00:21:37.999 "clone": false, 00:21:37.999 "esnap_clone": false 00:21:37.999 } 00:21:37.999 } 00:21:37.999 } 00:21:37.999 ]' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ee2ebae6-09e7-4b32-8db9-e0b91c34932b 
--l2p_dram_limit 10' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:37.999 14:10:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ee2ebae6-09e7-4b32-8db9-e0b91c34932b --l2p_dram_limit 10 -c nvc0n1p0 00:21:38.260 [2024-11-17 14:10:16.302060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.302111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:38.260 [2024-11-17 14:10:16.302125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.260 [2024-11-17 14:10:16.302135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.302199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.302214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.260 [2024-11-17 14:10:16.302222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:38.260 [2024-11-17 14:10:16.302248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.302273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:38.260 [2024-11-17 14:10:16.302536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:38.260 [2024-11-17 14:10:16.302558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.302568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.260 [2024-11-17 14:10:16.302579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:38.260 [2024-11-17 14:10:16.302588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.302618] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0c4083d2-4361-48d2-b921-3a0d7b4163ef 00:21:38.260 [2024-11-17 14:10:16.303762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.303788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:38.260 [2024-11-17 14:10:16.303799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:38.260 [2024-11-17 14:10:16.303806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.309210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.309255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.260 [2024-11-17 14:10:16.309267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.327 ms 00:21:38.260 [2024-11-17 14:10:16.309275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.309349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.309358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.260 [2024-11-17 14:10:16.309367] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:38.260 [2024-11-17 14:10:16.309376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.309427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.309436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.260 [2024-11-17 14:10:16.309445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:38.260 [2024-11-17 14:10:16.309452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.309477] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.260 [2024-11-17 14:10:16.310945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.310976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.260 [2024-11-17 14:10:16.310987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.474 ms 00:21:38.260 [2024-11-17 14:10:16.310996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.311027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.311037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.260 [2024-11-17 14:10:16.311045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:38.260 [2024-11-17 14:10:16.311055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.311071] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:38.260 [2024-11-17 14:10:16.311215] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.260 [2024-11-17 14:10:16.311226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.260 [2024-11-17 14:10:16.311254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.260 [2024-11-17 14:10:16.311264] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.260 [2024-11-17 14:10:16.311274] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.260 [2024-11-17 14:10:16.311291] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:38.260 [2024-11-17 14:10:16.311305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.260 [2024-11-17 14:10:16.311314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.260 [2024-11-17 14:10:16.311323] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.260 [2024-11-17 14:10:16.311333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.311342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.260 [2024-11-17 14:10:16.311349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:38.260 [2024-11-17 14:10:16.311358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.311441] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.260 [2024-11-17 14:10:16.311452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.260 [2024-11-17 14:10:16.311459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:38.260 [2024-11-17 14:10:16.311467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.260 [2024-11-17 14:10:16.311560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.261 [2024-11-17 14:10:16.311572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.261 [2024-11-17 14:10:16.311580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.261 [2024-11-17 14:10:16.311607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.261 [2024-11-17 14:10:16.311632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.261 [2024-11-17 14:10:16.311648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.261 [2024-11-17 14:10:16.311659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:38.261 [2024-11-17 14:10:16.311667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.261 [2024-11-17 14:10:16.311679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.261 [2024-11-17 14:10:16.311687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:38.261 [2024-11-17 14:10:16.311696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.261 [2024-11-17 14:10:16.311712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.261 [2024-11-17 14:10:16.311737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.261 [2024-11-17 14:10:16.311763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.261 [2024-11-17 14:10:16.311786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311802] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.261 [2024-11-17 14:10:16.311812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.261 [2024-11-17 14:10:16.311837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.261 [2024-11-17 14:10:16.311853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:38.261 [2024-11-17 14:10:16.311862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:38.261 [2024-11-17 14:10:16.311869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.261 [2024-11-17 14:10:16.311881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.261 [2024-11-17 14:10:16.311888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:38.261 [2024-11-17 14:10:16.311897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.261 [2024-11-17 14:10:16.311913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:38.261 [2024-11-17 14:10:16.311920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.261 [2024-11-17 14:10:16.311937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.261 [2024-11-17 14:10:16.311948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.261 [2024-11-17 14:10:16.311957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.261 [2024-11-17 14:10:16.311967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.261 [2024-11-17 14:10:16.311974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.261 [2024-11-17 14:10:16.311983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.261 [2024-11-17 14:10:16.311991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.261 [2024-11-17 14:10:16.312000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.261 [2024-11-17 14:10:16.312007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.261 [2024-11-17 14:10:16.312019] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.261 [2024-11-17 14:10:16.312032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:38.261 [2024-11-17 14:10:16.312049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:38.261 [2024-11-17 14:10:16.312059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:38.261 [2024-11-17 14:10:16.312066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:38.261 [2024-11-17 14:10:16.312075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:38.261 [2024-11-17 14:10:16.312081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:38.261 [2024-11-17 14:10:16.312093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:38.261 [2024-11-17 14:10:16.312101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:38.261 [2024-11-17 14:10:16.312109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:38.261 [2024-11-17 14:10:16.312115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:38.261 [2024-11-17 14:10:16.312154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.261 [2024-11-17 14:10:16.312164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.261 [2024-11-17 14:10:16.312181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.261 [2024-11-17 14:10:16.312189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.261 [2024-11-17 14:10:16.312196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.261 [2024-11-17 14:10:16.312205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.261 [2024-11-17 14:10:16.312211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.261 [2024-11-17 14:10:16.312222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:21:38.261 [2024-11-17 14:10:16.312229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.261 [2024-11-17 14:10:16.312278] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:38.261 [2024-11-17 14:10:16.312288] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:42.468 [2024-11-17 14:10:20.305633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.305715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:42.468 [2024-11-17 14:10:20.305740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3993.332 ms 00:21:42.468 [2024-11-17 14:10:20.305750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.321035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.321101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.468 [2024-11-17 14:10:20.321120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.152 ms 00:21:42.468 [2024-11-17 14:10:20.321130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.321278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.321291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:42.468 [2024-11-17 14:10:20.321307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:42.468 [2024-11-17 14:10:20.321315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.333314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.333368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.468 [2024-11-17 14:10:20.333387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.941 ms 00:21:42.468 [2024-11-17 14:10:20.333396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.333432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.333446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.468 [2024-11-17 14:10:20.333457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:42.468 [2024-11-17 14:10:20.333465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.334064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.334099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.468 [2024-11-17 14:10:20.334114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:21:42.468 [2024-11-17 14:10:20.334125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.334290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.334303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.468 [2024-11-17 14:10:20.334320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:21:42.468 [2024-11-17 14:10:20.334329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.468 [2024-11-17 14:10:20.349899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.468 [2024-11-17 14:10:20.349962] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.469 [2024-11-17 14:10:20.349979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.539 ms 00:21:42.469 [2024-11-17 14:10:20.349988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.359856] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:42.469 [2024-11-17 14:10:20.363653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.363706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:42.469 [2024-11-17 14:10:20.363718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.556 ms 00:21:42.469 [2024-11-17 14:10:20.363729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.450449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.450527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:42.469 [2024-11-17 14:10:20.450542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.686 ms 00:21:42.469 [2024-11-17 14:10:20.450557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.450776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.450792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:42.469 [2024-11-17 14:10:20.450802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:21:42.469 [2024-11-17 14:10:20.450812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.457341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.457406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:42.469 [2024-11-17 14:10:20.457418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.506 ms 00:21:42.469 [2024-11-17 14:10:20.457430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.463201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.463276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:42.469 [2024-11-17 14:10:20.463306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.714 ms 00:21:42.469 [2024-11-17 14:10:20.463316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.463664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.463680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:42.469 [2024-11-17 14:10:20.463691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:21:42.469 [2024-11-17 14:10:20.463704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.512252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.512323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:42.469 [2024-11-17 14:10:20.512337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.507 ms 00:21:42.469 [2024-11-17 14:10:20.512356] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.520419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.520486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:42.469 [2024-11-17 14:10:20.520497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.977 ms 00:21:42.469 [2024-11-17 14:10:20.520509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.527081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.527147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:42.469 [2024-11-17 14:10:20.527159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.518 ms 00:21:42.469 [2024-11-17 14:10:20.527169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.534052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.534113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.469 [2024-11-17 14:10:20.534124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.830 ms 00:21:42.469 [2024-11-17 14:10:20.534138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.534192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.534203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.469 [2024-11-17 14:10:20.534213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:42.469 [2024-11-17 14:10:20.534223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.534370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.469 [2024-11-17 14:10:20.534386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.469 [2024-11-17 14:10:20.534395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:42.469 [2024-11-17 14:10:20.534413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.469 [2024-11-17 14:10:20.535578] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4233.001 ms, result 0 00:21:42.469 { 00:21:42.469 "name": "ftl0", 00:21:42.469 "uuid": "0c4083d2-4361-48d2-b921-3a0d7b4163ef" 00:21:42.469 } 00:21:42.469 14:10:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:42.469 14:10:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:42.731 14:10:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:42.731 14:10:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:42.731 14:10:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:42.731 /dev/nbd0 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:21:42.731 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:42.731 1+0 records in 00:21:42.731 1+0 records out 00:21:42.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495761 s, 8.3 MB/s 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:21:42.992 14:10:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:42.992 [2024-11-17 14:10:21.108920] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:42.993 [2024-11-17 14:10:21.109069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88929 ] 00:21:42.993 [2024-11-17 14:10:21.263838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.254 [2024-11-17 14:10:21.335605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.199  [2024-11-17T14:10:23.879Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-17T14:10:24.455Z] Copying: 410/1024 [MB] (224 MBps) [2024-11-17T14:10:25.833Z] Copying: 671/1024 [MB] (260 MBps) [2024-11-17T14:10:26.093Z] Copying: 921/1024 [MB] (250 MBps) [2024-11-17T14:10:26.093Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:21:47.792 00:21:47.792 14:10:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:50.338 14:10:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:50.338 [2024-11-17 14:10:28.212596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:50.338 [2024-11-17 14:10:28.212852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89009 ] 00:21:50.338 [2024-11-17 14:10:28.363669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.338 [2024-11-17 14:10:28.405518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.283  [2024-11-17T14:10:30.523Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T14:10:31.897Z] Copying: 33/1024 [MB] (13 MBps) [2024-11-17T14:10:32.831Z] Copying: 61/1024 [MB] (28 MBps) [2024-11-17T14:10:33.768Z] Copying: 80/1024 [MB] (18 MBps) [2024-11-17T14:10:34.723Z] Copying: 97/1024 [MB] (16 MBps) [2024-11-17T14:10:35.658Z] Copying: 117/1024 [MB] (20 MBps) [2024-11-17T14:10:36.592Z] Copying: 139/1024 [MB] (22 MBps) [2024-11-17T14:10:37.526Z] Copying: 161/1024 [MB] (21 MBps) [2024-11-17T14:10:38.902Z] Copying: 182/1024 [MB] (21 MBps) [2024-11-17T14:10:39.835Z] Copying: 200/1024 [MB] (18 MBps) [2024-11-17T14:10:40.770Z] Copying: 221/1024 [MB] (21 MBps) [2024-11-17T14:10:41.705Z] Copying: 242/1024 [MB] (20 MBps) [2024-11-17T14:10:42.637Z] Copying: 264/1024 [MB] (21 MBps) [2024-11-17T14:10:43.571Z] Copying: 285/1024 [MB] (21 MBps) [2024-11-17T14:10:44.506Z] Copying: 306/1024 [MB] (21 MBps) [2024-11-17T14:10:45.880Z] Copying: 326/1024 [MB] (20 MBps) [2024-11-17T14:10:46.815Z] Copying: 345/1024 [MB] (18 MBps) [2024-11-17T14:10:47.751Z] Copying: 366/1024 [MB] (21 MBps) [2024-11-17T14:10:48.685Z] Copying: 383/1024 [MB] (16 MBps) [2024-11-17T14:10:49.619Z] Copying: 402/1024 [MB] (18 MBps) [2024-11-17T14:10:50.554Z] Copying: 418/1024 [MB] (16 MBps) [2024-11-17T14:10:51.489Z] Copying: 438/1024 [MB] (20 MBps) [2024-11-17T14:10:52.862Z] Copying: 460/1024 [MB] (21 MBps) [2024-11-17T14:10:53.805Z] Copying: 481/1024 [MB] (21 MBps) [2024-11-17T14:10:54.739Z] Copying: 499/1024 [MB] (18 MBps) [2024-11-17T14:10:55.676Z] Copying: 521/1024 [MB] (21 MBps) [2024-11-17T14:10:56.611Z] Copying: 544/1024 [MB] (23 MBps) [2024-11-17T14:10:57.545Z] Copying: 565/1024 [MB] (20 MBps) [2024-11-17T14:10:58.485Z] Copying: 586/1024 [MB] (20 MBps) [2024-11-17T14:10:59.858Z] Copying: 603/1024 [MB] (17 MBps) [2024-11-17T14:11:00.793Z] Copying: 634/1024 [MB] (30 MBps) [2024-11-17T14:11:01.729Z] Copying: 662/1024 [MB] (28 MBps) [2024-11-17T14:11:02.663Z] Copying: 684/1024 [MB] (21 MBps) [2024-11-17T14:11:03.637Z] Copying: 700/1024 [MB] (16 MBps) [2024-11-17T14:11:04.572Z] Copying: 716/1024 [MB] (15 MBps) [2024-11-17T14:11:05.503Z] Copying: 738/1024 [MB] (22 MBps) [2024-11-17T14:11:06.878Z] Copying: 761/1024 [MB] (22 MBps) [2024-11-17T14:11:07.811Z] Copying: 782/1024 [MB] (21 MBps) [2024-11-17T14:11:08.749Z] Copying: 802/1024 [MB] (19 MBps) [2024-11-17T14:11:09.682Z] Copying: 822/1024 [MB] (19 MBps) [2024-11-17T14:11:10.620Z] Copying: 843/1024 [MB] (21 MBps) [2024-11-17T14:11:11.563Z] Copying: 864/1024 [MB] (20 MBps) [2024-11-17T14:11:12.506Z] Copying: 877/1024 [MB] (13 MBps) [2024-11-17T14:11:13.890Z] Copying: 897/1024 [MB] (19 MBps) [2024-11-17T14:11:14.834Z] Copying: 918/1024 [MB] (21 MBps) [2024-11-17T14:11:15.774Z] Copying: 937/1024 [MB] (19 MBps) [2024-11-17T14:11:16.710Z] Copying: 959/1024 [MB] (21 MBps) [2024-11-17T14:11:17.652Z] Copying: 994/1024 [MB] (34 MBps) [2024-11-17T14:11:17.652Z] Copying: 1021/1024 [MB] (27 MBps) [2024-11-17T14:11:17.910Z] Copying: 1024/1024 [MB] (average 20 MBps) 00:22:39.609 
00:22:39.609 14:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:39.609 14:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:39.870 14:11:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:39.870 [2024-11-17 14:11:18.059269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.059305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:39.870 [2024-11-17 14:11:18.059319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:39.870 [2024-11-17 14:11:18.059325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.870 [2024-11-17 14:11:18.059345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:39.870 [2024-11-17 14:11:18.059837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.059862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:39.870 [2024-11-17 14:11:18.059870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:22:39.870 [2024-11-17 14:11:18.059879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.870 [2024-11-17 14:11:18.061724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.061754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:39.870 [2024-11-17 14:11:18.061762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:22:39.870 [2024-11-17 14:11:18.061770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.870 [2024-11-17 14:11:18.077832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.077860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:39.870 [2024-11-17 14:11:18.077870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:22:39.870 [2024-11-17 14:11:18.077878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.870 [2024-11-17 14:11:18.082651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.082675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:39.870 [2024-11-17 14:11:18.082683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:22:39.870 [2024-11-17 14:11:18.082692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.870 [2024-11-17 14:11:18.083945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.870 [2024-11-17 14:11:18.083977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:39.870 [2024-11-17 14:11:18.083984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:22:39.871 [2024-11-17 14:11:18.083992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.088901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.088930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:39.871 [2024-11-17 14:11:18.088946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 4.883 ms 00:22:39.871 [2024-11-17 14:11:18.088954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.089047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.089056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:39.871 [2024-11-17 14:11:18.089063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:39.871 [2024-11-17 14:11:18.089070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.091681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.091709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:39.871 [2024-11-17 14:11:18.091716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:22:39.871 [2024-11-17 14:11:18.091723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.093307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.093336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:39.871 [2024-11-17 14:11:18.093343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.558 ms 00:22:39.871 [2024-11-17 14:11:18.093350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.094510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.094539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:39.871 [2024-11-17 14:11:18.094545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:22:39.871 [2024-11-17 14:11:18.094553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.095720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.871 [2024-11-17 14:11:18.095749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:39.871 [2024-11-17 14:11:18.095756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:22:39.871 [2024-11-17 14:11:18.095763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.871 [2024-11-17 14:11:18.095787] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:39.871 [2024-11-17 14:11:18.095806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095859] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.095994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 
[2024-11-17 14:11:18.096031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:22:39.871 [2024-11-17 14:11:18.096220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:39.871 [2024-11-17 14:11:18.096304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:39.872 [2024-11-17 14:11:18.096527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:39.872 [2024-11-17 14:11:18.096534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c4083d2-4361-48d2-b921-3a0d7b4163ef 00:22:39.872 [2024-11-17 14:11:18.096544] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:39.872 [2024-11-17 14:11:18.096550] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:39.872 [2024-11-17 14:11:18.096557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:39.872 [2024-11-17 14:11:18.096563] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:39.872 [2024-11-17 14:11:18.096571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:39.872 [2024-11-17 14:11:18.096577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
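A hedged reading of the statistics dump above, assuming WAF here is the usual write-amplification factor (media writes divided by user writes): with 960 total writes against 0 user writes the ratio has no finite value, hence the reported "WAF: inf".

# Minimal sketch of that computation under the assumption above:
total=960; user=0
if [ "$user" -eq 0 ]; then echo "WAF: inf"; else echo "scale=2; $total / $user" | bc; fi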
00:22:39.872 [2024-11-17 14:11:18.096588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:39.872 [2024-11-17 14:11:18.096593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:39.872 [2024-11-17 14:11:18.096599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:39.872 [2024-11-17 14:11:18.096605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.872 [2024-11-17 14:11:18.096612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:39.872 [2024-11-17 14:11:18.096618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.818 ms 00:22:39.872 [2024-11-17 14:11:18.096626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.098330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.872 [2024-11-17 14:11:18.098355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:39.872 [2024-11-17 14:11:18.098363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:22:39.872 [2024-11-17 14:11:18.098371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.098458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.872 [2024-11-17 14:11:18.098468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:39.872 [2024-11-17 14:11:18.098474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:22:39.872 [2024-11-17 14:11:18.098481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.104404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.104433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:39.872 [2024-11-17 14:11:18.104441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.104449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.104494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.104502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:39.872 [2024-11-17 14:11:18.104509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.104516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.104572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.104585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:39.872 [2024-11-17 14:11:18.104591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.104600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.104616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.104625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:39.872 [2024-11-17 14:11:18.104631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.104640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.114978] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.115010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:39.872 [2024-11-17 14:11:18.115019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.115027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.123873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.123910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:39.872 [2024-11-17 14:11:18.123918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.123926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.123994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:39.872 [2024-11-17 14:11:18.124014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.124051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:39.872 [2024-11-17 14:11:18.124069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.124135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:39.872 [2024-11-17 14:11:18.124154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.124189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:39.872 [2024-11-17 14:11:18.124205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.124263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:39.872 [2024-11-17 14:11:18.124285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.872 [2024-11-17 14:11:18.124335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:39.872 [2024-11-17 14:11:18.124345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:39.872 [2024-11-17 14:11:18.124352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:39.872 [2024-11-17 14:11:18.124359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:39.873 [2024-11-17 14:11:18.124477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.196 ms, result 0 00:22:39.873 true 00:22:39.873 14:11:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88781 00:22:39.873 14:11:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88781 00:22:39.873 14:11:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:40.132 [2024-11-17 14:11:18.209430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:40.132 [2024-11-17 14:11:18.209548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89541 ] 00:22:40.132 [2024-11-17 14:11:18.356755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.132 [2024-11-17 14:11:18.406272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.507  [2024-11-17T14:11:20.743Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-17T14:11:21.678Z] Copying: 514/1024 [MB] (257 MBps) [2024-11-17T14:11:22.612Z] Copying: 768/1024 [MB] (253 MBps) [2024-11-17T14:11:22.612Z] Copying: 1019/1024 [MB] (251 MBps) [2024-11-17T14:11:22.872Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:22:44.571 00:22:44.571 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88781 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:44.571 14:11:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:44.571 [2024-11-17 14:11:22.746429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
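This is the dirty shutdown the test is named for: the SPDK target (pid 88781) is hard-killed while FTL is live, so no clean shutdown path runs. A minimal sketch of that step and of the transfer size, using only values visible in the trace above:

# Hard-kill the target and drop its stale shared-memory trace file (values from the log):
kill -9 88781
rm -f /dev/shm/spdk_tgt_trace.pid88781
# Transfer size implied by --bs=4096 --count=262144:
echo $(( 262144 * 4096 / 1024 / 1024 )) MB   # -> 1024 MB, the total in the progress lines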
00:22:44.571 [2024-11-17 14:11:22.746548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89592 ] 00:22:44.831 [2024-11-17 14:11:22.894176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.831 [2024-11-17 14:11:22.938550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.831 [2024-11-17 14:11:23.036750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.831 [2024-11-17 14:11:23.036808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.831 [2024-11-17 14:11:23.099405] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:44.831 [2024-11-17 14:11:23.099913] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:44.831 [2024-11-17 14:11:23.100506] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:45.399 [2024-11-17 14:11:23.566691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.566733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:45.399 [2024-11-17 14:11:23.566744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:45.399 [2024-11-17 14:11:23.566750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.566791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.566802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.399 [2024-11-17 14:11:23.566809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:45.399 [2024-11-17 14:11:23.566815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.566832] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:45.399 [2024-11-17 14:11:23.567027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:45.399 [2024-11-17 14:11:23.567040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.567047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.399 [2024-11-17 14:11:23.567053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:22:45.399 [2024-11-17 14:11:23.567059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.568320] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:45.399 [2024-11-17 14:11:23.570949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.570980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:45.399 [2024-11-17 14:11:23.570988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.630 ms 00:22:45.399 [2024-11-17 14:11:23.570994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.571037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.571045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:22:45.399 [2024-11-17 14:11:23.571052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:45.399 [2024-11-17 14:11:23.571058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.577206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.577230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.399 [2024-11-17 14:11:23.577248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.110 ms 00:22:45.399 [2024-11-17 14:11:23.577256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.577321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.577329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.399 [2024-11-17 14:11:23.577335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:45.399 [2024-11-17 14:11:23.577344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.577381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.577392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:45.399 [2024-11-17 14:11:23.577400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:45.399 [2024-11-17 14:11:23.577408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.577426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.399 [2024-11-17 14:11:23.578942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.578964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.399 [2024-11-17 14:11:23.578972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.520 ms 00:22:45.399 [2024-11-17 14:11:23.578978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.579009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.579019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:45.399 [2024-11-17 14:11:23.579026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:45.399 [2024-11-17 14:11:23.579033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.399 [2024-11-17 14:11:23.579048] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:45.399 [2024-11-17 14:11:23.579064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:45.399 [2024-11-17 14:11:23.579096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:45.399 [2024-11-17 14:11:23.579111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:45.399 [2024-11-17 14:11:23.579194] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:45.399 [2024-11-17 14:11:23.579205] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:45.399 
[2024-11-17 14:11:23.579215] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:45.399 [2024-11-17 14:11:23.579223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:45.399 [2024-11-17 14:11:23.579231] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:45.399 [2024-11-17 14:11:23.579249] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:45.399 [2024-11-17 14:11:23.579266] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:45.399 [2024-11-17 14:11:23.579272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:45.399 [2024-11-17 14:11:23.579278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:45.399 [2024-11-17 14:11:23.579286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.399 [2024-11-17 14:11:23.579297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:45.400 [2024-11-17 14:11:23.579304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:22:45.400 [2024-11-17 14:11:23.579311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.579374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.579381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:45.400 [2024-11-17 14:11:23.579387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:45.400 [2024-11-17 14:11:23.579401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.579477] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:45.400 [2024-11-17 14:11:23.579487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:45.400 [2024-11-17 14:11:23.579500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:45.400 [2024-11-17 14:11:23.579518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:45.400 [2024-11-17 14:11:23.579535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.400 [2024-11-17 14:11:23.579545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:45.400 [2024-11-17 14:11:23.579551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:45.400 [2024-11-17 14:11:23.579555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.400 [2024-11-17 14:11:23.579561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:45.400 [2024-11-17 14:11:23.579567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:45.400 [2024-11-17 14:11:23.579576] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:45.400 [2024-11-17 14:11:23.579586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:45.400 [2024-11-17 14:11:23.579604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:45.400 [2024-11-17 14:11:23.579623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:45.400 [2024-11-17 14:11:23.579643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:45.400 [2024-11-17 14:11:23.579661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:45.400 [2024-11-17 14:11:23.579682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.400 [2024-11-17 14:11:23.579694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:45.400 [2024-11-17 14:11:23.579699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:45.400 [2024-11-17 14:11:23.579705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.400 [2024-11-17 14:11:23.579711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:45.400 [2024-11-17 14:11:23.579718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:45.400 [2024-11-17 14:11:23.579724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:45.400 [2024-11-17 14:11:23.579736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:45.400 [2024-11-17 14:11:23.579742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 14:11:23.579748] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:45.400 [2024-11-17 14:11:23.579754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:45.400 [2024-11-17 14:11:23.579760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.400 [2024-11-17 
14:11:23.579776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:45.400 [2024-11-17 14:11:23.579781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:45.400 [2024-11-17 14:11:23.579787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:45.400 [2024-11-17 14:11:23.579792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:45.400 [2024-11-17 14:11:23.579799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:45.400 [2024-11-17 14:11:23.579805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:45.400 [2024-11-17 14:11:23.579812] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:45.400 [2024-11-17 14:11:23.579820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:45.400 [2024-11-17 14:11:23.579835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:45.400 [2024-11-17 14:11:23.579843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:45.400 [2024-11-17 14:11:23.579850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:45.400 [2024-11-17 14:11:23.579856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:45.400 [2024-11-17 14:11:23.579862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:45.400 [2024-11-17 14:11:23.579868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:45.400 [2024-11-17 14:11:23.579875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:45.400 [2024-11-17 14:11:23.579883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:45.400 [2024-11-17 14:11:23.579889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:45.400 [2024-11-17 14:11:23.579921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:22:45.400 [2024-11-17 14:11:23.579928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:45.400 [2024-11-17 14:11:23.579942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:45.400 [2024-11-17 14:11:23.579949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:45.400 [2024-11-17 14:11:23.579955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:45.400 [2024-11-17 14:11:23.579961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.579970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:45.400 [2024-11-17 14:11:23.579975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:22:45.400 [2024-11-17 14:11:23.579981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.602532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.602577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.400 [2024-11-17 14:11:23.602593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.510 ms 00:22:45.400 [2024-11-17 14:11:23.602613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.602739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.602755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.400 [2024-11-17 14:11:23.602771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:45.400 [2024-11-17 14:11:23.602787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.613132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.613160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.400 [2024-11-17 14:11:23.613170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.269 ms 00:22:45.400 [2024-11-17 14:11:23.613178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.400 [2024-11-17 14:11:23.613208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.400 [2024-11-17 14:11:23.613221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.401 [2024-11-17 14:11:23.613229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:45.401 [2024-11-17 14:11:23.613269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.613696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.613718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.401 [2024-11-17 14:11:23.613736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:22:45.401 [2024-11-17 14:11:23.613745] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.613879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.613894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.401 [2024-11-17 14:11:23.613906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:22:45.401 [2024-11-17 14:11:23.613919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.619689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.619715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.401 [2024-11-17 14:11:23.619733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.750 ms 00:22:45.401 [2024-11-17 14:11:23.619741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.622510] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:45.401 [2024-11-17 14:11:23.622538] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.401 [2024-11-17 14:11:23.622548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.622555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.401 [2024-11-17 14:11:23.622562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:22:45.401 [2024-11-17 14:11:23.622572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.634359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.634390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.401 [2024-11-17 14:11:23.634400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.755 ms 00:22:45.401 [2024-11-17 14:11:23.634407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.636256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.636280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.401 [2024-11-17 14:11:23.636288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms 00:22:45.401 [2024-11-17 14:11:23.636293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.638027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.638042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:45.401 [2024-11-17 14:11:23.638048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:22:45.401 [2024-11-17 14:11:23.638054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.638320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.638332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.401 [2024-11-17 14:11:23.638347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:22:45.401 [2024-11-17 14:11:23.638353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 
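The layout dump earlier in this startup trace is internally consistent; a hedged cross-check of the L2P region size from the numbers it reports (20971520 L2P entries, L2P address size 4, "Region l2p ... blocks: 80.00 MiB"):

# 20971520 entries x 4 B per entry = 80 MiB, exactly the l2p region size in the dump:
echo $(( 20971520 * 4 / 1024 / 1024 )) MiB   # -> 80 MiB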
[2024-11-17 14:11:23.655820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.655847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.401 [2024-11-17 14:11:23.655856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.451 ms 00:22:45.401 [2024-11-17 14:11:23.655862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.661869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.401 [2024-11-17 14:11:23.663957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.663977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.401 [2024-11-17 14:11:23.663985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.063 ms 00:22:45.401 [2024-11-17 14:11:23.663992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.664039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.664047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.401 [2024-11-17 14:11:23.664054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:45.401 [2024-11-17 14:11:23.664062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.664143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.664151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.401 [2024-11-17 14:11:23.664158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:45.401 [2024-11-17 14:11:23.664164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.664180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.664187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.401 [2024-11-17 14:11:23.664194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:45.401 [2024-11-17 14:11:23.664200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.664231] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.401 [2024-11-17 14:11:23.664255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.664261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.401 [2024-11-17 14:11:23.664268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:45.401 [2024-11-17 14:11:23.664274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.668128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.668153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.401 [2024-11-17 14:11:23.668161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.838 ms 00:22:45.401 [2024-11-17 14:11:23.668167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.668293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.401 [2024-11-17 14:11:23.668309] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.401 [2024-11-17 14:11:23.668321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:45.401 [2024-11-17 14:11:23.668330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.401 [2024-11-17 14:11:23.669193] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.134 ms, result 0 00:22:46.783  [2024-11-17T14:11:26.021Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-17T14:11:26.956Z] Copying: 38/1024 [MB] (11 MBps) [2024-11-17T14:11:27.893Z] Copying: 55/1024 [MB] (17 MBps) [2024-11-17T14:11:28.827Z] Copying: 70/1024 [MB] (14 MBps) [2024-11-17T14:11:29.764Z] Copying: 84/1024 [MB] (13 MBps) [2024-11-17T14:11:30.713Z] Copying: 97/1024 [MB] (13 MBps) [2024-11-17T14:11:31.751Z] Copying: 110280/1048576 [kB] (10176 kBps) [2024-11-17T14:11:32.688Z] Copying: 124/1024 [MB] (16 MBps) [2024-11-17T14:11:34.071Z] Copying: 136/1024 [MB] (12 MBps) [2024-11-17T14:11:35.013Z] Copying: 150344/1048576 [kB] (10200 kBps) [2024-11-17T14:11:35.957Z] Copying: 157/1024 [MB] (10 MBps) [2024-11-17T14:11:36.898Z] Copying: 170/1024 [MB] (12 MBps) [2024-11-17T14:11:37.832Z] Copying: 182/1024 [MB] (12 MBps) [2024-11-17T14:11:38.768Z] Copying: 195/1024 [MB] (13 MBps) [2024-11-17T14:11:39.710Z] Copying: 207/1024 [MB] (11 MBps) [2024-11-17T14:11:41.090Z] Copying: 217/1024 [MB] (10 MBps) [2024-11-17T14:11:42.023Z] Copying: 233056/1048576 [kB] (10168 kBps) [2024-11-17T14:11:42.957Z] Copying: 238/1024 [MB] (11 MBps) [2024-11-17T14:11:43.891Z] Copying: 251/1024 [MB] (12 MBps) [2024-11-17T14:11:44.832Z] Copying: 263/1024 [MB] (11 MBps) [2024-11-17T14:11:45.766Z] Copying: 273/1024 [MB] (10 MBps) [2024-11-17T14:11:46.705Z] Copying: 287/1024 [MB] (13 MBps) [2024-11-17T14:11:48.085Z] Copying: 300/1024 [MB] (13 MBps) [2024-11-17T14:11:49.030Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-17T14:11:49.970Z] Copying: 321/1024 [MB] (10 MBps) [2024-11-17T14:11:50.905Z] Copying: 332/1024 [MB] (10 MBps) [2024-11-17T14:11:51.844Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-17T14:11:52.786Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-17T14:11:53.728Z] Copying: 364/1024 [MB] (10 MBps) [2024-11-17T14:11:55.106Z] Copying: 375/1024 [MB] (10 MBps) [2024-11-17T14:11:56.041Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-17T14:11:56.977Z] Copying: 397/1024 [MB] (11 MBps) [2024-11-17T14:11:57.915Z] Copying: 409/1024 [MB] (11 MBps) [2024-11-17T14:11:58.847Z] Copying: 419/1024 [MB] (10 MBps) [2024-11-17T14:11:59.776Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-17T14:12:00.737Z] Copying: 443/1024 [MB] (12 MBps) [2024-11-17T14:12:02.117Z] Copying: 455/1024 [MB] (11 MBps) [2024-11-17T14:12:02.689Z] Copying: 467/1024 [MB] (12 MBps) [2024-11-17T14:12:04.064Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-17T14:12:04.998Z] Copying: 489/1024 [MB] (11 MBps) [2024-11-17T14:12:05.933Z] Copying: 501/1024 [MB] (11 MBps) [2024-11-17T14:12:06.870Z] Copying: 513/1024 [MB] (11 MBps) [2024-11-17T14:12:07.805Z] Copying: 525/1024 [MB] (11 MBps) [2024-11-17T14:12:08.741Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-17T14:12:10.116Z] Copying: 548/1024 [MB] (11 MBps) [2024-11-17T14:12:10.684Z] Copying: 560/1024 [MB] (11 MBps) [2024-11-17T14:12:12.059Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-17T14:12:12.998Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-17T14:12:13.949Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-17T14:12:14.900Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-17T14:12:15.840Z] 
Copying: 616/1024 [MB] (12 MBps) [2024-11-17T14:12:16.780Z] Copying: 627/1024 [MB] (10 MBps) [2024-11-17T14:12:17.718Z] Copying: 652496/1048576 [kB] (10144 kBps) [2024-11-17T14:12:19.101Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-17T14:12:20.051Z] Copying: 659/1024 [MB] (11 MBps) [2024-11-17T14:12:20.988Z] Copying: 670/1024 [MB] (11 MBps) [2024-11-17T14:12:21.928Z] Copying: 681/1024 [MB] (11 MBps) [2024-11-17T14:12:22.868Z] Copying: 692/1024 [MB] (10 MBps) [2024-11-17T14:12:23.808Z] Copying: 702/1024 [MB] (10 MBps) [2024-11-17T14:12:24.747Z] Copying: 714/1024 [MB] (11 MBps) [2024-11-17T14:12:25.689Z] Copying: 728/1024 [MB] (13 MBps) [2024-11-17T14:12:27.073Z] Copying: 776/1024 [MB] (48 MBps) [2024-11-17T14:12:28.033Z] Copying: 804/1024 [MB] (27 MBps) [2024-11-17T14:12:28.977Z] Copying: 824/1024 [MB] (19 MBps) [2024-11-17T14:12:29.921Z] Copying: 843/1024 [MB] (19 MBps) [2024-11-17T14:12:30.866Z] Copying: 855/1024 [MB] (11 MBps) [2024-11-17T14:12:31.810Z] Copying: 868/1024 [MB] (13 MBps) [2024-11-17T14:12:32.752Z] Copying: 883/1024 [MB] (14 MBps) [2024-11-17T14:12:33.697Z] Copying: 899/1024 [MB] (16 MBps) [2024-11-17T14:12:35.084Z] Copying: 915/1024 [MB] (15 MBps) [2024-11-17T14:12:36.028Z] Copying: 929/1024 [MB] (14 MBps) [2024-11-17T14:12:36.970Z] Copying: 942/1024 [MB] (12 MBps) [2024-11-17T14:12:37.911Z] Copying: 957/1024 [MB] (15 MBps) [2024-11-17T14:12:38.853Z] Copying: 972/1024 [MB] (14 MBps) [2024-11-17T14:12:39.794Z] Copying: 983/1024 [MB] (11 MBps) [2024-11-17T14:12:40.739Z] Copying: 999/1024 [MB] (16 MBps) [2024-11-17T14:12:42.126Z] Copying: 1020/1024 [MB] (20 MBps) [2024-11-17T14:12:42.126Z] Copying: 1048552/1048576 [kB] (3736 kBps) [2024-11-17T14:12:42.126Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 14:12:41.711664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.711886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:03.825 [2024-11-17 14:12:41.711912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:03.825 [2024-11-17 14:12:41.711922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.713694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:03.825 [2024-11-17 14:12:41.715621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.715669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:03.825 [2024-11-17 14:12:41.715681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:24:03.825 [2024-11-17 14:12:41.715691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.728895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.728948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:03.825 [2024-11-17 14:12:41.728960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.787 ms 00:24:03.825 [2024-11-17 14:12:41.728980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.752407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.752459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:03.825 [2024-11-17 14:12:41.752480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.410 ms 00:24:03.825 [2024-11-17 14:12:41.752489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.758662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.758831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:03.825 [2024-11-17 14:12:41.758850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.134 ms 00:24:03.825 [2024-11-17 14:12:41.758869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.761735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.761788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:03.825 [2024-11-17 14:12:41.761801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:24:03.825 [2024-11-17 14:12:41.761809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:41.767036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:41.767097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:03.825 [2024-11-17 14:12:41.767115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:24:03.825 [2024-11-17 14:12:41.767131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.054769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:42.054838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:03.825 [2024-11-17 14:12:42.054852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 287.585 ms 00:24:03.825 [2024-11-17 14:12:42.054861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.057986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:42.058164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:03.825 [2024-11-17 14:12:42.058182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.094 ms 00:24:03.825 [2024-11-17 14:12:42.058191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.060922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:42.060970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:03.825 [2024-11-17 14:12:42.060980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:24:03.825 [2024-11-17 14:12:42.060988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.063158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:42.063347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:03.825 [2024-11-17 14:12:42.063424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:24:03.825 [2024-11-17 14:12:42.063448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.065672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.825 [2024-11-17 14:12:42.065703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:03.825 [2024-11-17 
14:12:42.065712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:24:03.825 [2024-11-17 14:12:42.065719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.825 [2024-11-17 14:12:42.065746] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:03.825 [2024-11-17 14:12:42.065759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:24:03.825 [2024-11-17 14:12:42.065769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:03.825 [2024-11-17 14:12:42.065822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.065993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066111] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 
14:12:42.066316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:03.826 [2024-11-17 14:12:42.066506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:24:03.826 [2024-11-17 14:12:42.066514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:03.827 [2024-11-17 14:12:42.066521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:03.827 [2024-11-17 14:12:42.066529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:03.827 [2024-11-17 14:12:42.066544] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:03.827 [2024-11-17 14:12:42.066556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c4083d2-4361-48d2-b921-3a0d7b4163ef 00:24:03.827 [2024-11-17 14:12:42.066568] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:24:03.827 [2024-11-17 14:12:42.066576] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:24:03.827 [2024-11-17 14:12:42.066582] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:24:03.827 [2024-11-17 14:12:42.066591] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:24:03.827 [2024-11-17 14:12:42.066603] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:03.827 [2024-11-17 14:12:42.066611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:03.827 [2024-11-17 14:12:42.066618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:03.827 [2024-11-17 14:12:42.066624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:03.827 [2024-11-17 14:12:42.066631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:03.827 [2024-11-17 14:12:42.066643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.827 [2024-11-17 14:12:42.066651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:03.827 [2024-11-17 14:12:42.066660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:24:03.827 [2024-11-17 14:12:42.066670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.068157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.827 [2024-11-17 14:12:42.068179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:03.827 [2024-11-17 14:12:42.068189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:24:03.827 [2024-11-17 14:12:42.068197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.068297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.827 [2024-11-17 14:12:42.068306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:03.827 [2024-11-17 14:12:42.068318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:03.827 [2024-11-17 14:12:42.068326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.072906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.073012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.827 [2024-11-17 14:12:42.073060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.073088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.073147] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.073167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.827 [2024-11-17 14:12:42.073189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.073207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.073282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.073373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.827 [2024-11-17 14:12:42.073393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.073410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.073436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.073456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.827 [2024-11-17 14:12:42.073474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.073529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.082178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.082326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.827 [2024-11-17 14:12:42.082375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.082397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.089327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.089445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.827 [2024-11-17 14:12:42.089498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.089520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.089574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.089597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.827 [2024-11-17 14:12:42.089616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.089634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.089679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.089701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.827 [2024-11-17 14:12:42.089756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.089777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.089862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.089886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.827 [2024-11-17 14:12:42.089905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.089923] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.089961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.090052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:03.827 [2024-11-17 14:12:42.090071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.090090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.090138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.090160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.827 [2024-11-17 14:12:42.090178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.090230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.090298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:03.827 [2024-11-17 14:12:42.090322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.827 [2024-11-17 14:12:42.090341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:03.827 [2024-11-17 14:12:42.090359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.827 [2024-11-17 14:12:42.090492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.957 ms, result 0 00:24:04.770 00:24:04.770 00:24:04.770 14:12:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:06.686 14:12:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.948 [2024-11-17 14:12:45.000826] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
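(For reference: the ftl_debug.c statistics dump earlier in this run reports, for ftl0, total writes 104128, user writes 103168, and WAF 1.0093. Taking WAF in its usual sense as total media writes divided by user writes, the printed value checks out, and the ~960 extra blocks are the FTL's own metadata/housekeeping traffic. A minimal C check, with the two counters copied from the dump:)

    #include <stdio.h>

    int main(void)
    {
        /* Counters copied from the ftl_debug.c dump for ftl0. */
        const double total_writes = 104128.0;  /* "total writes" */
        const double user_writes  = 103168.0;  /* "user writes"  */

        /* Usual write-amplification definition: media writes / user writes.
         * 104128 / 103168 = 1.00930..., matching the logged "WAF: 1.0093". */
        printf("WAF = %.4f\n", total_writes / user_writes);
        printf("extra (housekeeping) blocks = %.0f\n", total_writes - user_writes);
        return 0;
    }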
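(The two shell steps above are the heart of the dirty-shutdown check: dirty_shutdown.sh@90 records an md5sum of testfile2, and dirty_shutdown.sh@93 uses spdk_dd to read 262144 blocks back from the ftl0 bdev into testfile, presumably so the sums can be compared against those taken before the unclean shutdown. The sketch below shows the same idea as a plain byte-for-byte comparison; it is illustrative only, not SPDK test code, and the testfile.orig reference path is a hypothetical stand-in for the recorded md5sum:)

    #include <stdio.h>
    #include <string.h>

    /* Compare two files byte-for-byte; returns 1 on match, 0 on mismatch,
     * -1 if either file cannot be opened. */
    static int files_match(const char *a, const char *b)
    {
        FILE *fa = fopen(a, "rb");
        FILE *fb = fopen(b, "rb");
        int match = -1;

        if (fa && fb) {
            char ba[4096], bb[4096];
            size_t na, nb;
            match = 1;
            do {
                na = fread(ba, 1, sizeof(ba), fa);
                nb = fread(bb, 1, sizeof(bb), fb);
                if (na != nb || memcmp(ba, bb, na) != 0) {
                    match = 0;
                    break;
                }
            } while (na == sizeof(ba));
        }
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return match;
    }

    int main(void)
    {
        /* testfile path taken from the spdk_dd line above; the .orig
         * reference copy is hypothetical. */
        int ok = files_match("/home/vagrant/spdk_repo/spdk/test/ftl/testfile",
                             "/home/vagrant/spdk_repo/spdk/test/ftl/testfile.orig");
        printf(ok == 1 ? "contents match\n" : "contents differ or unreadable\n");
        return ok == 1 ? 0 : 1;
    }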
00:24:06.948 [2024-11-17 14:12:45.000953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90434 ] 00:24:06.948 [2024-11-17 14:12:45.149164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.948 [2024-11-17 14:12:45.178624] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.211 [2024-11-17 14:12:45.259533] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.211 [2024-11-17 14:12:45.259586] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.211 [2024-11-17 14:12:45.406148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.211 [2024-11-17 14:12:45.406187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.211 [2024-11-17 14:12:45.406199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:07.211 [2024-11-17 14:12:45.406205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.211 [2024-11-17 14:12:45.406251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.211 [2024-11-17 14:12:45.406259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.211 [2024-11-17 14:12:45.406266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:07.211 [2024-11-17 14:12:45.406272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.211 [2024-11-17 14:12:45.406289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.211 [2024-11-17 14:12:45.406470] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.211 [2024-11-17 14:12:45.406483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.211 [2024-11-17 14:12:45.406489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.211 [2024-11-17 14:12:45.406498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:24:07.211 [2024-11-17 14:12:45.406505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.211 [2024-11-17 14:12:45.407416] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.211 [2024-11-17 14:12:45.409327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.211 [2024-11-17 14:12:45.409355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.212 [2024-11-17 14:12:45.409366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:24:07.212 [2024-11-17 14:12:45.409372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.409415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.409426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.212 [2024-11-17 14:12:45.409432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:07.212 [2024-11-17 14:12:45.409438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.413732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:07.212 [2024-11-17 14:12:45.413758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.212 [2024-11-17 14:12:45.413766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:24:07.212 [2024-11-17 14:12:45.413774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.413838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.413846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.212 [2024-11-17 14:12:45.413852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:07.212 [2024-11-17 14:12:45.413857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.413894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.413902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.212 [2024-11-17 14:12:45.413912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.212 [2024-11-17 14:12:45.413917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.413937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.212 [2024-11-17 14:12:45.415097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.415126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.212 [2024-11-17 14:12:45.415134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:24:07.212 [2024-11-17 14:12:45.415139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.415161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.415170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.212 [2024-11-17 14:12:45.415176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.212 [2024-11-17 14:12:45.415182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.415199] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.212 [2024-11-17 14:12:45.415215] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.212 [2024-11-17 14:12:45.415267] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.212 [2024-11-17 14:12:45.415280] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.212 [2024-11-17 14:12:45.415358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.212 [2024-11-17 14:12:45.415366] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.212 [2024-11-17 14:12:45.415374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:07.212 [2024-11-17 14:12:45.415385] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415394] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:07.212 [2024-11-17 14:12:45.415406] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.212 [2024-11-17 14:12:45.415414] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.212 [2024-11-17 14:12:45.415419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.212 [2024-11-17 14:12:45.415429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.415435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.212 [2024-11-17 14:12:45.415444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:24:07.212 [2024-11-17 14:12:45.415450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.415512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.212 [2024-11-17 14:12:45.415520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.212 [2024-11-17 14:12:45.415529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:07.212 [2024-11-17 14:12:45.415534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.212 [2024-11-17 14:12:45.415605] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.212 [2024-11-17 14:12:45.415612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.212 [2024-11-17 14:12:45.415621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.212 [2024-11-17 14:12:45.415644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.212 [2024-11-17 14:12:45.415663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.212 [2024-11-17 14:12:45.415674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.212 [2024-11-17 14:12:45.415680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:07.212 [2024-11-17 14:12:45.415685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.212 [2024-11-17 14:12:45.415690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.212 [2024-11-17 14:12:45.415695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:07.212 [2024-11-17 14:12:45.415700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:07.212 [2024-11-17 14:12:45.415710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415714] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.212 [2024-11-17 14:12:45.415725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.212 [2024-11-17 14:12:45.415742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.212 [2024-11-17 14:12:45.415758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.212 [2024-11-17 14:12:45.415776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.212 [2024-11-17 14:12:45.415787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.212 [2024-11-17 14:12:45.415793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:07.212 [2024-11-17 14:12:45.415799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.213 [2024-11-17 14:12:45.415804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.213 [2024-11-17 14:12:45.415810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:07.213 [2024-11-17 14:12:45.415816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.213 [2024-11-17 14:12:45.415821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.213 [2024-11-17 14:12:45.415827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:07.213 [2024-11-17 14:12:45.415834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.213 [2024-11-17 14:12:45.415840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.213 [2024-11-17 14:12:45.415845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:07.213 [2024-11-17 14:12:45.415851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.213 [2024-11-17 14:12:45.415857] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.213 [2024-11-17 14:12:45.415864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.213 [2024-11-17 14:12:45.415870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.213 [2024-11-17 14:12:45.415878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.213 [2024-11-17 14:12:45.415884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.213 [2024-11-17 14:12:45.415890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.213 [2024-11-17 14:12:45.415896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.213 
[2024-11-17 14:12:45.415902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.213 [2024-11-17 14:12:45.415908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.213 [2024-11-17 14:12:45.415913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.213 [2024-11-17 14:12:45.415920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.213 [2024-11-17 14:12:45.415928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.415936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:07.213 [2024-11-17 14:12:45.415943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:07.213 [2024-11-17 14:12:45.415950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:07.213 [2024-11-17 14:12:45.415955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:07.213 [2024-11-17 14:12:45.415962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:07.213 [2024-11-17 14:12:45.415968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:07.213 [2024-11-17 14:12:45.415975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:07.213 [2024-11-17 14:12:45.415981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:07.213 [2024-11-17 14:12:45.415987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:07.213 [2024-11-17 14:12:45.415993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.415999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.416005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.416011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.416017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:07.213 [2024-11-17 14:12:45.416023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.213 [2024-11-17 14:12:45.416030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.416039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.213 [2024-11-17 14:12:45.416046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.213 [2024-11-17 14:12:45.416052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.213 [2024-11-17 14:12:45.416058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.213 [2024-11-17 14:12:45.416065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.416074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.213 [2024-11-17 14:12:45.416080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:24:07.213 [2024-11-17 14:12:45.416091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.433257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.433299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.213 [2024-11-17 14:12:45.433319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.131 ms 00:24:07.213 [2024-11-17 14:12:45.433328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.433422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.433431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.213 [2024-11-17 14:12:45.433440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:07.213 [2024-11-17 14:12:45.433449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.441355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.441387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.213 [2024-11-17 14:12:45.441397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.847 ms 00:24:07.213 [2024-11-17 14:12:45.441404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.441439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.441448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.213 [2024-11-17 14:12:45.441456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:07.213 [2024-11-17 14:12:45.441463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.441768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.441800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.213 [2024-11-17 14:12:45.441809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:07.213 [2024-11-17 14:12:45.441816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.441934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.441943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.213 [2024-11-17 14:12:45.441950] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:07.213 [2024-11-17 14:12:45.441957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.446261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.446289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.213 [2024-11-17 14:12:45.446302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:24:07.213 [2024-11-17 14:12:45.446309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.449025] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:07.213 [2024-11-17 14:12:45.449170] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.213 [2024-11-17 14:12:45.449185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.449192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.213 [2024-11-17 14:12:45.449200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.786 ms 00:24:07.213 [2024-11-17 14:12:45.449208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.460839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.460865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.213 [2024-11-17 14:12:45.460879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.582 ms 00:24:07.213 [2024-11-17 14:12:45.460884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.462463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.213 [2024-11-17 14:12:45.462487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.213 [2024-11-17 14:12:45.462494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:24:07.213 [2024-11-17 14:12:45.462499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.213 [2024-11-17 14:12:45.463950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.464047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.214 [2024-11-17 14:12:45.464062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:24:07.214 [2024-11-17 14:12:45.464068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.464320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.464335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.214 [2024-11-17 14:12:45.464342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:24:07.214 [2024-11-17 14:12:45.464347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.478090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.478130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.214 [2024-11-17 14:12:45.478139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.730 ms 00:24:07.214 [2024-11-17 14:12:45.478145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.483862] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:07.214 [2024-11-17 14:12:45.485639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.485669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.214 [2024-11-17 14:12:45.485679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.463 ms 00:24:07.214 [2024-11-17 14:12:45.485685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.485725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.485733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.214 [2024-11-17 14:12:45.485743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.214 [2024-11-17 14:12:45.485748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.486878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.486991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.214 [2024-11-17 14:12:45.487004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:24:07.214 [2024-11-17 14:12:45.487010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.487031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.487037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.214 [2024-11-17 14:12:45.487043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:07.214 [2024-11-17 14:12:45.487049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.487074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.214 [2024-11-17 14:12:45.487082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.487091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.214 [2024-11-17 14:12:45.487097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.214 [2024-11-17 14:12:45.487103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.490478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.490504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.214 [2024-11-17 14:12:45.490517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.360 ms 00:24:07.214 [2024-11-17 14:12:45.490526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 [2024-11-17 14:12:45.490580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.214 [2024-11-17 14:12:45.490587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.214 [2024-11-17 14:12:45.490594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:07.214 [2024-11-17 14:12:45.490600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.214 
[2024-11-17 14:12:45.491319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.829 ms, result 0 00:24:08.601  [2024-11-17T14:13:23.085Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-11-17 14:13:22.977186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:22.977310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:44.784 [2024-11-17 14:13:22.977331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:44.784 [2024-11-17 14:13:22.977344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:22.977373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:44.784 [2024-11-17 14:13:22.978382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:22.978419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:44.784 [2024-11-17 14:13:22.978442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:24:44.784 [2024-11-17 14:13:22.978454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 
14:13:22.978746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:22.978760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:44.784 [2024-11-17 14:13:22.978771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:44.784 [2024-11-17 14:13:22.978782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:22.992706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:22.992760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:44.784 [2024-11-17 14:13:22.992775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.902 ms 00:24:44.784 [2024-11-17 14:13:22.992791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:22.999025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:22.999224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:44.784 [2024-11-17 14:13:22.999283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:24:44.784 [2024-11-17 14:13:22.999293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.002326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.002374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:44.784 [2024-11-17 14:13:23.002385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.962 ms 00:24:44.784 [2024-11-17 14:13:23.002393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.007662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.007714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:44.784 [2024-11-17 14:13:23.007724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:24:44.784 [2024-11-17 14:13:23.007742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.012292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.012338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:44.784 [2024-11-17 14:13:23.012360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.502 ms 00:24:44.784 [2024-11-17 14:13:23.012369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.015940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.015988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:44.784 [2024-11-17 14:13:23.015998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.555 ms 00:24:44.784 [2024-11-17 14:13:23.016005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.018840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.018902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:44.784 [2024-11-17 14:13:23.018912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:24:44.784 [2024-11-17 14:13:23.018920] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.021009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.784 [2024-11-17 14:13:23.021055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:44.784 [2024-11-17 14:13:23.021065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:24:44.784 [2024-11-17 14:13:23.021073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.784 [2024-11-17 14:13:23.023493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.785 [2024-11-17 14:13:23.023539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:44.785 [2024-11-17 14:13:23.023549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.348 ms 00:24:44.785 [2024-11-17 14:13:23.023556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.785 [2024-11-17 14:13:23.023595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:44.785 [2024-11-17 14:13:23.023621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:44.785 [2024-11-17 14:13:23.023632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:24:44.785 [2024-11-17 14:13:23.023640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023758] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 
14:13:23.023956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.023993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:24:44.785 [2024-11-17 14:13:23.024146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:44.785 [2024-11-17 14:13:23.024321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:44.786 [2024-11-17 14:13:23.024451] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:44.786 [2024-11-17 14:13:23.024460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c4083d2-4361-48d2-b921-3a0d7b4163ef 00:24:44.786 [2024-11-17 14:13:23.024473] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:24:44.786 [2024-11-17 14:13:23.024486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161472 00:24:44.786 [2024-11-17 14:13:23.024494] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159488 00:24:44.786 [2024-11-17 14:13:23.024506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:24:44.786 [2024-11-17 14:13:23.024513] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:44.786 [2024-11-17 14:13:23.024523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:44.786 [2024-11-17 14:13:23.024531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:44.786 [2024-11-17 14:13:23.024537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:44.786 [2024-11-17 14:13:23.024544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:44.786 [2024-11-17 14:13:23.024551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.786 [2024-11-17 14:13:23.024559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:44.786 [2024-11-17 14:13:23.024567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:24:44.786 [2024-11-17 14:13:23.024575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.027130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.786 [2024-11-17 14:13:23.027313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:44.786 [2024-11-17 14:13:23.027448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:24:44.786 [2024-11-17 14:13:23.027475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.027622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:44.786 [2024-11-17 14:13:23.027916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:24:44.786 [2024-11-17 14:13:23.027960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:44.786 [2024-11-17 14:13:23.027989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.034764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.034920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:44.786 [2024-11-17 14:13:23.034977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.035001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.035070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.035093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:44.786 [2024-11-17 14:13:23.035114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.035141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.035216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.035370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:44.786 [2024-11-17 14:13:23.035394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.035415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.035445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.035467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:44.786 [2024-11-17 14:13:23.035486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.035555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.049029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.049214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:44.786 [2024-11-17 14:13:23.049293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.049327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.059814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.059981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:44.786 [2024-11-17 14:13:23.060047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.060159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:44.786 [2024-11-17 14:13:23.060179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 
[2024-11-17 14:13:23.060327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:44.786 [2024-11-17 14:13:23.060350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.060497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:44.786 [2024-11-17 14:13:23.060507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.060563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:44.786 [2024-11-17 14:13:23.060572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.060633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:44.786 [2024-11-17 14:13:23.060641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:44.786 [2024-11-17 14:13:23.060710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:44.786 [2024-11-17 14:13:23.060718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:44.786 [2024-11-17 14:13:23.060727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:44.786 [2024-11-17 14:13:23.060858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 83.653 ms, result 0 00:24:45.048 00:24:45.048 00:24:45.048 14:13:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:47.633 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:47.633 14:13:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.633 [2024-11-17 14:13:25.605734] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
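The md5sum check and spdk_dd read-back above (dirty_shutdown.sh@94 and @95) are the heart of this test: data written to the FTL bdev before the simulated dirty shutdown has to come back byte-identical after the recovery pass. A minimal sketch of that verify step, assuming an FTL bdev named ftl0 described by the same ftl.json, with the block counts taken from the invocation above (file paths and the testfile2.md5 name are illustrative, not taken from this log):

    #!/usr/bin/env bash
    set -e
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

    # Read the second 262144-block slice of ftl0 back into a regular file,
    # mirroring the spdk_dd flags used above (--skip offsets into the bdev).
    "$SPDK_DD" --ib=ftl0 --of=testfile2 --count=262144 --skip=262144 --json="$CFG"

    # Compare against the checksum recorded before the dirty shutdown;
    # a mismatch means FTL recovery lost or corrupted user data.
    md5sum -c testfile2.md5
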
00:24:47.633 [2024-11-17 14:13:25.605877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90854 ] 00:24:47.633 [2024-11-17 14:13:25.759083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.633 [2024-11-17 14:13:25.809974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.633 [2024-11-17 14:13:25.925334] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.633 [2024-11-17 14:13:25.925685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.895 [2024-11-17 14:13:26.087060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.087124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:47.896 [2024-11-17 14:13:26.087147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:47.896 [2024-11-17 14:13:26.087156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.087218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.087229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.896 [2024-11-17 14:13:26.087285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:47.896 [2024-11-17 14:13:26.087295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.087318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:47.896 [2024-11-17 14:13:26.087859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:47.896 [2024-11-17 14:13:26.087910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.087921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.896 [2024-11-17 14:13:26.087932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:24:47.896 [2024-11-17 14:13:26.087944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.089684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:47.896 [2024-11-17 14:13:26.093583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.093636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:47.896 [2024-11-17 14:13:26.093649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.901 ms 00:24:47.896 [2024-11-17 14:13:26.093657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.093740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.093753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:47.896 [2024-11-17 14:13:26.093762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:47.896 [2024-11-17 14:13:26.093770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.101869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:47.896 [2024-11-17 14:13:26.101924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.896 [2024-11-17 14:13:26.101935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.052 ms 00:24:47.896 [2024-11-17 14:13:26.101947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.102052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.102062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.896 [2024-11-17 14:13:26.102075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:47.896 [2024-11-17 14:13:26.102085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.102142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.102151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:47.896 [2024-11-17 14:13:26.102160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.896 [2024-11-17 14:13:26.102174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.102200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.896 [2024-11-17 14:13:26.104313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.104503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.896 [2024-11-17 14:13:26.104522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:24:47.896 [2024-11-17 14:13:26.104531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.104576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.104585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:47.896 [2024-11-17 14:13:26.104593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:47.896 [2024-11-17 14:13:26.104601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.104628] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:47.896 [2024-11-17 14:13:26.104651] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:47.896 [2024-11-17 14:13:26.104687] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:47.896 [2024-11-17 14:13:26.104703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:47.896 [2024-11-17 14:13:26.104809] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:47.896 [2024-11-17 14:13:26.104820] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:47.896 [2024-11-17 14:13:26.104831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:47.896 [2024-11-17 14:13:26.104842] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:47.896 [2024-11-17 14:13:26.104855] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:47.896 [2024-11-17 14:13:26.104863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:47.896 [2024-11-17 14:13:26.104871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:47.896 [2024-11-17 14:13:26.104879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:47.896 [2024-11-17 14:13:26.104891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:47.896 [2024-11-17 14:13:26.104900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.104908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:47.896 [2024-11-17 14:13:26.104916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:24:47.896 [2024-11-17 14:13:26.104926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.105010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.896 [2024-11-17 14:13:26.105022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:47.896 [2024-11-17 14:13:26.105036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:47.896 [2024-11-17 14:13:26.105045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.896 [2024-11-17 14:13:26.105143] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:47.896 [2024-11-17 14:13:26.105154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:47.896 [2024-11-17 14:13:26.105164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.896 [2024-11-17 14:13:26.105179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.896 [2024-11-17 14:13:26.105188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:47.896 [2024-11-17 14:13:26.105196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:47.896 [2024-11-17 14:13:26.105204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:47.896 [2024-11-17 14:13:26.105212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:47.896 [2024-11-17 14:13:26.105221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:47.896 [2024-11-17 14:13:26.105230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.896 [2024-11-17 14:13:26.105267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:47.896 [2024-11-17 14:13:26.105276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:47.896 [2024-11-17 14:13:26.105284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.896 [2024-11-17 14:13:26.105291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:47.896 [2024-11-17 14:13:26.105300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:47.896 [2024-11-17 14:13:26.105307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:47.897 [2024-11-17 14:13:26.105324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105333] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:47.897 [2024-11-17 14:13:26.105350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:47.897 [2024-11-17 14:13:26.105375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:47.897 [2024-11-17 14:13:26.105405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:47.897 [2024-11-17 14:13:26.105426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:47.897 [2024-11-17 14:13:26.105446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.897 [2024-11-17 14:13:26.105461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:47.897 [2024-11-17 14:13:26.105468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:47.897 [2024-11-17 14:13:26.105475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.897 [2024-11-17 14:13:26.105481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:47.897 [2024-11-17 14:13:26.105487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:47.897 [2024-11-17 14:13:26.105497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:47.897 [2024-11-17 14:13:26.105510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:47.897 [2024-11-17 14:13:26.105519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:47.897 [2024-11-17 14:13:26.105533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:47.897 [2024-11-17 14:13:26.105540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.897 [2024-11-17 14:13:26.105564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:47.897 [2024-11-17 14:13:26.105571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:47.897 [2024-11-17 14:13:26.105577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:47.897 
[2024-11-17 14:13:26.105587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:47.897 [2024-11-17 14:13:26.105594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:47.897 [2024-11-17 14:13:26.105601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:47.897 [2024-11-17 14:13:26.105610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:47.897 [2024-11-17 14:13:26.105619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:47.897 [2024-11-17 14:13:26.105636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:47.897 [2024-11-17 14:13:26.105643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:47.897 [2024-11-17 14:13:26.105653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:47.897 [2024-11-17 14:13:26.105661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:47.897 [2024-11-17 14:13:26.105668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:47.897 [2024-11-17 14:13:26.105675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:47.897 [2024-11-17 14:13:26.105681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:47.897 [2024-11-17 14:13:26.105688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:47.897 [2024-11-17 14:13:26.105696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:47.897 [2024-11-17 14:13:26.105732] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:47.897 [2024-11-17 14:13:26.105741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:47.897 [2024-11-17 14:13:26.105756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:47.897 [2024-11-17 14:13:26.105763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:47.897 [2024-11-17 14:13:26.105773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:47.897 [2024-11-17 14:13:26.105781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.105789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:47.897 [2024-11-17 14:13:26.105796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:24:47.897 [2024-11-17 14:13:26.105804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.131669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.131734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.897 [2024-11-17 14:13:26.131751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.812 ms 00:24:47.897 [2024-11-17 14:13:26.131762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.131883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.131905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:47.897 [2024-11-17 14:13:26.131916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:47.897 [2024-11-17 14:13:26.131925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.144328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.144367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.897 [2024-11-17 14:13:26.144379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms 00:24:47.897 [2024-11-17 14:13:26.144387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.144424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.144434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.897 [2024-11-17 14:13:26.144442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:47.897 [2024-11-17 14:13:26.144454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.145008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.145038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.897 [2024-11-17 14:13:26.145050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:24:47.897 [2024-11-17 14:13:26.145059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.145216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.897 [2024-11-17 14:13:26.145266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.897 [2024-11-17 14:13:26.145277] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:24:47.897 [2024-11-17 14:13:26.145286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.897 [2024-11-17 14:13:26.152399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.152440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.898 [2024-11-17 14:13:26.152458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.087 ms 00:24:47.898 [2024-11-17 14:13:26.152466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.898 [2024-11-17 14:13:26.156426] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:47.898 [2024-11-17 14:13:26.156475] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:47.898 [2024-11-17 14:13:26.156487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.156496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:47.898 [2024-11-17 14:13:26.156506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.918 ms 00:24:47.898 [2024-11-17 14:13:26.156513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.898 [2024-11-17 14:13:26.172425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.172474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:47.898 [2024-11-17 14:13:26.172490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.850 ms 00:24:47.898 [2024-11-17 14:13:26.172498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.898 [2024-11-17 14:13:26.175530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.175576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:47.898 [2024-11-17 14:13:26.175586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:24:47.898 [2024-11-17 14:13:26.175594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.898 [2024-11-17 14:13:26.178530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.178578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:47.898 [2024-11-17 14:13:26.178588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:24:47.898 [2024-11-17 14:13:26.178595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.898 [2024-11-17 14:13:26.178949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.898 [2024-11-17 14:13:26.178961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:47.898 [2024-11-17 14:13:26.178970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:24:47.898 [2024-11-17 14:13:26.178983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.203595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.203661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:48.159 [2024-11-17 14:13:26.203675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.593 ms 00:24:48.159 [2024-11-17 14:13:26.203685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.212011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:48.159 [2024-11-17 14:13:26.214996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.215036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.159 [2024-11-17 14:13:26.215056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.262 ms 00:24:48.159 [2024-11-17 14:13:26.215067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.215141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.215151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:48.159 [2024-11-17 14:13:26.215161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:48.159 [2024-11-17 14:13:26.215169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.215961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.215996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.159 [2024-11-17 14:13:26.216007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:24:48.159 [2024-11-17 14:13:26.216020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.216046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.216061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.159 [2024-11-17 14:13:26.216069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:48.159 [2024-11-17 14:13:26.216077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.216115] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:48.159 [2024-11-17 14:13:26.216125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.216137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:48.159 [2024-11-17 14:13:26.216149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:48.159 [2024-11-17 14:13:26.216159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.221480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.221673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.159 [2024-11-17 14:13:26.221693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:24:48.159 [2024-11-17 14:13:26.221702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 [2024-11-17 14:13:26.222097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.159 [2024-11-17 14:13:26.222140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.159 [2024-11-17 14:13:26.222153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:48.159 [2024-11-17 14:13:26.222162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.159 
[2024-11-17 14:13:26.223572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.988 ms, result 0 00:24:49.100  [2024-11-17T14:13:28.788Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T14:13:29.746Z] Copying: 39/1024 [MB] (20 MBps) [2024-11-17T14:13:30.689Z] Copying: 60/1024 [MB] (20 MBps) [2024-11-17T14:13:31.634Z] Copying: 76/1024 [MB] (16 MBps) [2024-11-17T14:13:32.576Z] Copying: 101/1024 [MB] (25 MBps) [2024-11-17T14:13:33.519Z] Copying: 130/1024 [MB] (28 MBps) [2024-11-17T14:13:34.464Z] Copying: 153/1024 [MB] (23 MBps) [2024-11-17T14:13:35.410Z] Copying: 173/1024 [MB] (19 MBps) [2024-11-17T14:13:36.798Z] Copying: 190/1024 [MB] (16 MBps) [2024-11-17T14:13:37.742Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-17T14:13:38.687Z] Copying: 224/1024 [MB] (19 MBps) [2024-11-17T14:13:39.630Z] Copying: 248/1024 [MB] (23 MBps) [2024-11-17T14:13:40.574Z] Copying: 269/1024 [MB] (21 MBps) [2024-11-17T14:13:41.517Z] Copying: 289/1024 [MB] (19 MBps) [2024-11-17T14:13:42.460Z] Copying: 310/1024 [MB] (20 MBps) [2024-11-17T14:13:43.404Z] Copying: 329/1024 [MB] (19 MBps) [2024-11-17T14:13:44.792Z] Copying: 348/1024 [MB] (19 MBps) [2024-11-17T14:13:45.735Z] Copying: 365/1024 [MB] (16 MBps) [2024-11-17T14:13:46.680Z] Copying: 387/1024 [MB] (22 MBps) [2024-11-17T14:13:47.623Z] Copying: 407/1024 [MB] (19 MBps) [2024-11-17T14:13:48.566Z] Copying: 422/1024 [MB] (15 MBps) [2024-11-17T14:13:49.508Z] Copying: 433/1024 [MB] (10 MBps) [2024-11-17T14:13:50.453Z] Copying: 443/1024 [MB] (10 MBps) [2024-11-17T14:13:51.841Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-17T14:13:52.412Z] Copying: 464/1024 [MB] (10 MBps) [2024-11-17T14:13:53.799Z] Copying: 474/1024 [MB] (10 MBps) [2024-11-17T14:13:54.742Z] Copying: 484/1024 [MB] (10 MBps) [2024-11-17T14:13:55.684Z] Copying: 495/1024 [MB] (10 MBps) [2024-11-17T14:13:56.628Z] Copying: 516/1024 [MB] (21 MBps) [2024-11-17T14:13:57.569Z] Copying: 527/1024 [MB] (11 MBps) [2024-11-17T14:13:58.554Z] Copying: 538/1024 [MB] (10 MBps) [2024-11-17T14:13:59.544Z] Copying: 551/1024 [MB] (13 MBps) [2024-11-17T14:14:00.488Z] Copying: 565/1024 [MB] (14 MBps) [2024-11-17T14:14:01.430Z] Copying: 582/1024 [MB] (17 MBps) [2024-11-17T14:14:02.816Z] Copying: 599/1024 [MB] (16 MBps) [2024-11-17T14:14:03.759Z] Copying: 615/1024 [MB] (15 MBps) [2024-11-17T14:14:04.703Z] Copying: 629/1024 [MB] (14 MBps) [2024-11-17T14:14:05.643Z] Copying: 648/1024 [MB] (18 MBps) [2024-11-17T14:14:06.587Z] Copying: 663/1024 [MB] (15 MBps) [2024-11-17T14:14:07.532Z] Copying: 678/1024 [MB] (15 MBps) [2024-11-17T14:14:08.475Z] Copying: 693/1024 [MB] (14 MBps) [2024-11-17T14:14:09.419Z] Copying: 710/1024 [MB] (17 MBps) [2024-11-17T14:14:10.804Z] Copying: 725/1024 [MB] (14 MBps) [2024-11-17T14:14:11.747Z] Copying: 739/1024 [MB] (14 MBps) [2024-11-17T14:14:12.691Z] Copying: 758/1024 [MB] (18 MBps) [2024-11-17T14:14:13.635Z] Copying: 770/1024 [MB] (11 MBps) [2024-11-17T14:14:14.579Z] Copying: 785/1024 [MB] (15 MBps) [2024-11-17T14:14:15.522Z] Copying: 799/1024 [MB] (14 MBps) [2024-11-17T14:14:16.467Z] Copying: 816/1024 [MB] (16 MBps) [2024-11-17T14:14:17.409Z] Copying: 830/1024 [MB] (14 MBps) [2024-11-17T14:14:18.798Z] Copying: 850/1024 [MB] (20 MBps) [2024-11-17T14:14:19.743Z] Copying: 864/1024 [MB] (14 MBps) [2024-11-17T14:14:20.690Z] Copying: 876/1024 [MB] (11 MBps) [2024-11-17T14:14:21.633Z] Copying: 887/1024 [MB] (10 MBps) [2024-11-17T14:14:22.576Z] Copying: 897/1024 [MB] (10 MBps) [2024-11-17T14:14:23.518Z] Copying: 908/1024 [MB] (10 MBps) 
[2024-11-17T14:14:24.460Z] Copying: 920/1024 [MB] (12 MBps) [2024-11-17T14:14:25.402Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-17T14:14:26.783Z] Copying: 944/1024 [MB] (12 MBps) [2024-11-17T14:14:27.725Z] Copying: 961/1024 [MB] (17 MBps) [2024-11-17T14:14:28.667Z] Copying: 978/1024 [MB] (17 MBps) [2024-11-17T14:14:29.614Z] Copying: 990/1024 [MB] (11 MBps) [2024-11-17T14:14:30.614Z] Copying: 1006/1024 [MB] (16 MBps) [2024-11-17T14:14:31.187Z] Copying: 1017/1024 [MB] (10 MBps) [2024-11-17T14:14:31.187Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-17 14:14:31.173892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.886 [2024-11-17 14:14:31.173994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:52.886 [2024-11-17 14:14:31.174016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.886 [2024-11-17 14:14:31.174029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.886 [2024-11-17 14:14:31.174070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:52.886 [2024-11-17 14:14:31.174934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.886 [2024-11-17 14:14:31.174971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:52.886 [2024-11-17 14:14:31.174987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:25:52.886 [2024-11-17 14:14:31.175002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.886 [2024-11-17 14:14:31.175435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.886 [2024-11-17 14:14:31.175460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:52.886 [2024-11-17 14:14:31.175474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:25:52.886 [2024-11-17 14:14:31.175485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.886 [2024-11-17 14:14:31.181957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.886 [2024-11-17 14:14:31.182208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:52.886 [2024-11-17 14:14:31.182233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.449 ms 00:25:52.886 [2024-11-17 14:14:31.182262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.190233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.190298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.149 [2024-11-17 14:14:31.190309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.939 ms 00:25:53.149 [2024-11-17 14:14:31.190318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.193544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.193598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.149 [2024-11-17 14:14:31.193609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:25:53.149 [2024-11-17 14:14:31.193617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.198924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.198979] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.149 [2024-11-17 14:14:31.198991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.261 ms 00:25:53.149 [2024-11-17 14:14:31.199000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.204013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.204059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.149 [2024-11-17 14:14:31.204082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.964 ms 00:25:53.149 [2024-11-17 14:14:31.204091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.207412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.207461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.149 [2024-11-17 14:14:31.207472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.303 ms 00:25:53.149 [2024-11-17 14:14:31.207481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.210399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.210446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.149 [2024-11-17 14:14:31.210456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:25:53.149 [2024-11-17 14:14:31.210464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.213084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.213134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.149 [2024-11-17 14:14:31.213144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:25:53.149 [2024-11-17 14:14:31.213153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.215660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.149 [2024-11-17 14:14:31.215707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.149 [2024-11-17 14:14:31.215717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.435 ms 00:25:53.149 [2024-11-17 14:14:31.215725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.149 [2024-11-17 14:14:31.215765] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.149 [2024-11-17 14:14:31.215789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:53.149 [2024-11-17 14:14:31.215800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:53.149 [2024-11-17 14:14:31.215808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 
[2024-11-17 14:14:31.215841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.149 [2024-11-17 14:14:31.215907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.215992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:25:53.150 [2024-11-17 14:14:31.216044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.150 [2024-11-17 14:14:31.216624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.150 [2024-11-17 14:14:31.216632] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c4083d2-4361-48d2-b921-3a0d7b4163ef 00:25:53.150 [2024-11-17 14:14:31.216646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:53.151 [2024-11-17 14:14:31.216654] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:53.151 [2024-11-17 14:14:31.216663] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:53.151 [2024-11-17 14:14:31.216671] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:53.151 [2024-11-17 14:14:31.216678] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.151 [2024-11-17 14:14:31.216686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.151 [2024-11-17 14:14:31.216695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.151 [2024-11-17 14:14:31.216703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.151 [2024-11-17 14:14:31.216713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.151 [2024-11-17 14:14:31.216721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.151 [2024-11-17 14:14:31.216729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.151 [2024-11-17 14:14:31.216746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:25:53.151 [2024-11-17 14:14:31.216762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.219129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.151 [2024-11-17 14:14:31.219155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.151 [2024-11-17 14:14:31.219167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:25:53.151 [2024-11-17 14:14:31.219178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.219640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.151 [2024-11-17 14:14:31.219685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.151 [2024-11-17 14:14:31.219708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:25:53.151 [2024-11-17 14:14:31.219729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.226579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.226743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.151 [2024-11-17 14:14:31.226799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.226822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.226905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.226929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.151 [2024-11-17 14:14:31.226950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.226969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.227056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.227170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.151 [2024-11-17 14:14:31.227193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.227212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.227276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.227306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.151 [2024-11-17 14:14:31.227327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:25:53.151 [2024-11-17 14:14:31.227393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.240858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.241041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.151 [2024-11-17 14:14:31.241098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.241122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.251581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.251765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.151 [2024-11-17 14:14:31.251819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.251844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.251905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.251927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.151 [2024-11-17 14:14:31.251962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.251984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.252036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.252440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.151 [2024-11-17 14:14:31.252584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.252617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.252784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.252816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.151 [2024-11-17 14:14:31.252884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.252910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.252966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.252992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.151 [2024-11-17 14:14:31.253012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.253162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.253223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.253272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.151 [2024-11-17 14:14:31.253301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.253320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.253382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.151 [2024-11-17 14:14:31.253446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.151 [2024-11-17 14:14:31.253477] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.151 [2024-11-17 14:14:31.253496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.151 [2024-11-17 14:14:31.253652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.733 ms, result 0 00:25:53.412 00:25:53.412 00:25:53.412 14:14:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:55.961 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:55.961 Process with pid 88781 is not found 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88781 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 88781 ']' 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 88781 00:25:55.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (88781) - No such process 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 88781 is not found' 00:25:55.961 14:14:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:56.223 Remove shared memory files 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:56.223 ************************************ 00:25:56.223 END TEST ftl_dirty_shutdown 00:25:56.223 ************************************ 00:25:56.223 00:25:56.223 real 4m22.009s 00:25:56.223 user 4m47.549s 00:25:56.223 sys 0m27.066s 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:56.223 14:14:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:56.223 14:14:34 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:56.223 14:14:34 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:25:56.223 14:14:34 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:56.223 
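The md5sum -c check above is the actual pass criterion of ftl_dirty_shutdown: testfile2 was checksummed before the simulated unclean stop, and after FTL's dirty-startup recovery the re-read data must still match bit for bit. Everything after it is teardown; killprocess finds that pid 88781 is already gone (the target was stopped as part of the test) and the suite simply logs that and moves on. Reduced to its essentials, the pattern is as follows (paths are illustrative; the real script drives the I/O through the FTL bdev and keeps its checksum files under test/ftl/):

  md5sum testfile2 > testfile2.md5    # checksum data written before the unclean stop
  # stop the target without a clean FTL shutdown, restart it, let recovery run
  md5sum -c testfile2.md5             # succeeds only if every block reads back identical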
14:14:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:56.223 ************************************ 00:25:56.223 START TEST ftl_upgrade_shutdown 00:25:56.223 ************************************ 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:56.223 * Looking for test storage... 00:25:56.223 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.223 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:56.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.224 --rc genhtml_branch_coverage=1 00:25:56.224 --rc genhtml_function_coverage=1 00:25:56.224 --rc genhtml_legend=1 00:25:56.224 --rc geninfo_all_blocks=1 00:25:56.224 --rc geninfo_unexecuted_blocks=1 00:25:56.224 00:25:56.224 ' 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:56.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.224 --rc genhtml_branch_coverage=1 00:25:56.224 --rc genhtml_function_coverage=1 00:25:56.224 --rc genhtml_legend=1 00:25:56.224 --rc geninfo_all_blocks=1 00:25:56.224 --rc geninfo_unexecuted_blocks=1 00:25:56.224 00:25:56.224 ' 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:56.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.224 --rc genhtml_branch_coverage=1 00:25:56.224 --rc genhtml_function_coverage=1 00:25:56.224 --rc genhtml_legend=1 00:25:56.224 --rc geninfo_all_blocks=1 00:25:56.224 --rc geninfo_unexecuted_blocks=1 00:25:56.224 00:25:56.224 ' 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:56.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.224 --rc genhtml_branch_coverage=1 00:25:56.224 --rc genhtml_function_coverage=1 00:25:56.224 --rc genhtml_legend=1 00:25:56.224 --rc geninfo_all_blocks=1 00:25:56.224 --rc geninfo_unexecuted_blocks=1 00:25:56.224 00:25:56.224 ' 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:56.224 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:56.485 14:14:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:56.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=91629 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 91629 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91629 ']' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:56.485 14:14:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:56.485 [2024-11-17 14:14:34.630603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:56.485 [2024-11-17 14:14:34.630990] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91629 ] 00:25:56.747 [2024-11-17 14:14:34.784206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.747 [2024-11-17 14:14:34.835083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:57.319 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:25:57.581 14:14:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:57.842 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:57.842 { 00:25:57.842 "name": "basen1", 00:25:57.842 "aliases": [ 00:25:57.842 "b6cb4e92-12ea-4bd2-9ebf-d0b7a420a7c7" 00:25:57.842 ], 00:25:57.842 "product_name": "NVMe disk", 00:25:57.842 "block_size": 4096, 00:25:57.842 "num_blocks": 1310720, 00:25:57.842 "uuid": "b6cb4e92-12ea-4bd2-9ebf-d0b7a420a7c7", 00:25:57.842 "numa_id": -1, 00:25:57.842 "assigned_rate_limits": { 00:25:57.843 "rw_ios_per_sec": 0, 00:25:57.843 "rw_mbytes_per_sec": 0, 00:25:57.843 "r_mbytes_per_sec": 0, 00:25:57.843 "w_mbytes_per_sec": 0 00:25:57.843 }, 00:25:57.843 "claimed": true, 00:25:57.843 "claim_type": "read_many_write_one", 00:25:57.843 "zoned": false, 00:25:57.843 "supported_io_types": { 00:25:57.843 "read": true, 00:25:57.843 "write": true, 00:25:57.843 "unmap": true, 00:25:57.843 "flush": true, 00:25:57.843 "reset": true, 00:25:57.843 "nvme_admin": true, 00:25:57.843 "nvme_io": true, 00:25:57.843 "nvme_io_md": false, 00:25:57.843 "write_zeroes": true, 00:25:57.843 "zcopy": false, 00:25:57.843 "get_zone_info": false, 00:25:57.843 "zone_management": false, 00:25:57.843 "zone_append": false, 00:25:57.843 "compare": true, 00:25:57.843 "compare_and_write": false, 00:25:57.843 "abort": true, 00:25:57.843 "seek_hole": false, 00:25:57.843 "seek_data": false, 00:25:57.843 "copy": true, 00:25:57.843 "nvme_iov_md": false 00:25:57.843 }, 00:25:57.843 "driver_specific": { 00:25:57.843 "nvme": [ 00:25:57.843 { 00:25:57.843 "pci_address": "0000:00:11.0", 00:25:57.843 "trid": { 00:25:57.843 "trtype": "PCIe", 00:25:57.843 "traddr": "0000:00:11.0" 00:25:57.843 }, 00:25:57.843 "ctrlr_data": { 00:25:57.843 "cntlid": 0, 00:25:57.843 "vendor_id": "0x1b36", 00:25:57.843 "model_number": "QEMU NVMe Ctrl", 00:25:57.843 "serial_number": "12341", 00:25:57.843 "firmware_revision": "8.0.0", 00:25:57.843 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:57.843 "oacs": { 00:25:57.843 "security": 0, 00:25:57.843 "format": 1, 00:25:57.843 "firmware": 0, 00:25:57.843 "ns_manage": 1 00:25:57.843 }, 00:25:57.843 "multi_ctrlr": false, 00:25:57.843 "ana_reporting": false 00:25:57.843 }, 00:25:57.843 "vs": { 00:25:57.843 "nvme_version": "1.4" 00:25:57.843 }, 00:25:57.843 "ns_data": { 00:25:57.843 "id": 1, 00:25:57.843 "can_share": false 00:25:57.843 } 00:25:57.843 } 00:25:57.843 ], 00:25:57.843 "mp_policy": "active_passive" 00:25:57.843 } 00:25:57.843 } 00:25:57.843 ]' 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:57.843 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:58.104 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=54a45a5b-718e-4bc9-9d65-17d583d4cab7 00:25:58.104 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:58.104 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54a45a5b-718e-4bc9-9d65-17d583d4cab7 00:25:58.366 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:58.627 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=8f2fe191-2a8d-4295-94f9-5bc7d4bc5970 00:25:58.627 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8f2fe191-2a8d-4295-94f9-5bc7d4bc5970 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a8e48a9b-6750-4a6c-9a1e-af61954ad245 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a8e48a9b-6750-4a6c-9a1e-af61954ad245 ]] 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a8e48a9b-6750-4a6c-9a1e-af61954ad245 5120 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a8e48a9b-6750-4a6c-9a1e-af61954ad245 00:25:58.888 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a8e48a9b-6750-4a6c-9a1e-af61954ad245 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a8e48a9b-6750-4a6c-9a1e-af61954ad245 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:25:58.889 14:14:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8e48a9b-6750-4a6c-9a1e-af61954ad245 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:59.150 { 00:25:59.150 "name": "a8e48a9b-6750-4a6c-9a1e-af61954ad245", 00:25:59.150 "aliases": [ 00:25:59.150 "lvs/basen1p0" 00:25:59.150 ], 00:25:59.150 "product_name": "Logical Volume", 00:25:59.150 "block_size": 4096, 00:25:59.150 "num_blocks": 5242880, 00:25:59.150 "uuid": "a8e48a9b-6750-4a6c-9a1e-af61954ad245", 00:25:59.150 "assigned_rate_limits": { 00:25:59.150 "rw_ios_per_sec": 0, 00:25:59.150 "rw_mbytes_per_sec": 0, 00:25:59.150 "r_mbytes_per_sec": 0, 00:25:59.150 "w_mbytes_per_sec": 0 00:25:59.150 }, 00:25:59.150 "claimed": false, 00:25:59.150 "zoned": false, 00:25:59.150 "supported_io_types": { 00:25:59.150 "read": true, 00:25:59.150 "write": true, 00:25:59.150 "unmap": true, 00:25:59.150 "flush": false, 00:25:59.150 "reset": true, 00:25:59.150 "nvme_admin": false, 00:25:59.150 "nvme_io": false, 00:25:59.150 "nvme_io_md": false, 00:25:59.150 "write_zeroes": 
true, 00:25:59.150 "zcopy": false, 00:25:59.150 "get_zone_info": false, 00:25:59.150 "zone_management": false, 00:25:59.150 "zone_append": false, 00:25:59.150 "compare": false, 00:25:59.150 "compare_and_write": false, 00:25:59.150 "abort": false, 00:25:59.150 "seek_hole": true, 00:25:59.150 "seek_data": true, 00:25:59.150 "copy": false, 00:25:59.150 "nvme_iov_md": false 00:25:59.150 }, 00:25:59.150 "driver_specific": { 00:25:59.150 "lvol": { 00:25:59.150 "lvol_store_uuid": "8f2fe191-2a8d-4295-94f9-5bc7d4bc5970", 00:25:59.150 "base_bdev": "basen1", 00:25:59.150 "thin_provision": true, 00:25:59.150 "num_allocated_clusters": 0, 00:25:59.150 "snapshot": false, 00:25:59.150 "clone": false, 00:25:59.150 "esnap_clone": false 00:25:59.150 } 00:25:59.150 } 00:25:59.150 } 00:25:59.150 ]' 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:59.150 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:59.411 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:59.411 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:59.411 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:59.671 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:59.671 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:59.672 14:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a8e48a9b-6750-4a6c-9a1e-af61954ad245 -c cachen1p0 --l2p_dram_limit 2 00:25:59.672 [2024-11-17 14:14:37.969283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.672 [2024-11-17 14:14:37.969354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:59.672 [2024-11-17 14:14:37.969371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:59.672 [2024-11-17 14:14:37.969383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.672 [2024-11-17 14:14:37.969439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.672 [2024-11-17 14:14:37.969453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:59.672 [2024-11-17 14:14:37.969463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:25:59.672 [2024-11-17 14:14:37.969477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.672 [2024-11-17 14:14:37.969502] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:59.672 [2024-11-17 
14:14:37.970105] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:59.672 [2024-11-17 14:14:37.970132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.672 [2024-11-17 14:14:37.970143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:59.672 [2024-11-17 14:14:37.970157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:25:59.672 [2024-11-17 14:14:37.970167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.970267] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 3f2b34a3-3f30-4474-8b16-2775e53de0aa 00:25:59.936 [2024-11-17 14:14:37.972262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.972305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:59.936 [2024-11-17 14:14:37.972320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:25:59.936 [2024-11-17 14:14:37.972329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.980905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.980949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:59.936 [2024-11-17 14:14:37.980977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.523 ms 00:25:59.936 [2024-11-17 14:14:37.980986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.981040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.981050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:59.936 [2024-11-17 14:14:37.981061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:59.936 [2024-11-17 14:14:37.981072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.981136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.981146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:59.936 [2024-11-17 14:14:37.981159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:59.936 [2024-11-17 14:14:37.981167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.981194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:59.936 [2024-11-17 14:14:37.983510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.983555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:59.936 [2024-11-17 14:14:37.983570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.324 ms 00:25:59.936 [2024-11-17 14:14:37.983581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.983613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.983625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:59.936 [2024-11-17 14:14:37.983635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:59.936 [2024-11-17 14:14:37.983647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.983665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:59.936 [2024-11-17 14:14:37.983811] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:59.936 [2024-11-17 14:14:37.983826] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:59.936 [2024-11-17 14:14:37.983840] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:59.936 [2024-11-17 14:14:37.983851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:59.936 [2024-11-17 14:14:37.983870] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:59.936 [2024-11-17 14:14:37.983879] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:59.936 [2024-11-17 14:14:37.983894] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:59.936 [2024-11-17 14:14:37.983902] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:59.936 [2024-11-17 14:14:37.983914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:59.936 [2024-11-17 14:14:37.983928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.983943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:59.936 [2024-11-17 14:14:37.983953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:25:59.936 [2024-11-17 14:14:37.983967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.984050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.936 [2024-11-17 14:14:37.984063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:59.936 [2024-11-17 14:14:37.984074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:25:59.936 [2024-11-17 14:14:37.984084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.936 [2024-11-17 14:14:37.984179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:59.936 [2024-11-17 14:14:37.984194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:59.936 [2024-11-17 14:14:37.984203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:59.936 [2024-11-17 14:14:37.984213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.936 [2024-11-17 14:14:37.984221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:59.936 [2024-11-17 14:14:37.984229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:59.936 [2024-11-17 14:14:37.984258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:59.937 [2024-11-17 14:14:37.984270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:59.937 [2024-11-17 14:14:37.984277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:59.937 [2024-11-17 14:14:37.984286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:59.937 [2024-11-17 14:14:37.984302] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:59.937 [2024-11-17 14:14:37.984309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:59.937 [2024-11-17 14:14:37.984328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:59.937 [2024-11-17 14:14:37.984336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:59.937 [2024-11-17 14:14:37.984352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:59.937 [2024-11-17 14:14:37.984361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:59.937 [2024-11-17 14:14:37.984379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:59.937 [2024-11-17 14:14:37.984405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:59.937 [2024-11-17 14:14:37.984430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:59.937 [2024-11-17 14:14:37.984457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:59.937 [2024-11-17 14:14:37.984479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:59.937 [2024-11-17 14:14:37.984505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:59.937 [2024-11-17 14:14:37.984528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:59.937 [2024-11-17 14:14:37.984552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:59.937 [2024-11-17 14:14:37.984558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984566] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:59.937 [2024-11-17 14:14:37.984575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:59.937 [2024-11-17 14:14:37.984586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:59.937 [2024-11-17 14:14:37.984615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:59.937 [2024-11-17 14:14:37.984622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:59.937 [2024-11-17 14:14:37.984630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:59.937 [2024-11-17 14:14:37.984639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:59.937 [2024-11-17 14:14:37.984647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:59.937 [2024-11-17 14:14:37.984654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:59.937 [2024-11-17 14:14:37.984668] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:59.937 [2024-11-17 14:14:37.984677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:59.937 [2024-11-17 14:14:37.984695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:59.937 [2024-11-17 14:14:37.984727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:59.937 [2024-11-17 14:14:37.984734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:59.937 [2024-11-17 14:14:37.984746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:59.937 [2024-11-17 14:14:37.984753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:59.937 [2024-11-17 14:14:37.984812] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:59.937 [2024-11-17 14:14:37.984823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:59.937 [2024-11-17 14:14:37.984839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:59.937 [2024-11-17 14:14:37.984848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:59.937 [2024-11-17 14:14:37.984856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:59.937 [2024-11-17 14:14:37.984866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:59.937 [2024-11-17 14:14:37.984873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:59.937 [2024-11-17 14:14:37.984884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.751 ms 00:25:59.937 [2024-11-17 14:14:37.984892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:59.937 [2024-11-17 14:14:37.984932] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
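For reference, the FTL instance whose startup is traced here was assembled by the RPC sequence recorded a little earlier in this log. Condensed into a replayable sketch (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the UUIDs are the ones this particular run generated and will differ on any other run):

  # find and wipe the leftover lvstore on basen1, then carve a thin-provisioned 20480 MiB volume
  $ rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'        # -> 54a45a5b-...
  $ rpc.py bdev_lvol_delete_lvstore -u 54a45a5b-718e-4bc9-9d65-17d583d4cab7
  $ rpc.py bdev_lvol_create_lvstore basen1 lvs
  $ rpc.py bdev_lvol_create basen1p0 20480 -t -u 8f2fe191-2a8d-4295-94f9-5bc7d4bc5970
  # attach the second NVMe controller as the cache device and split off a 5120 MiB slice
  $ rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  $ rpc.py bdev_split_create cachen1 -s 5120 1
  # bind base volume and write-buffer cache into a single FTL bdev; --l2p_dram_limit caps
  # L2P DRAM usage (MiB), and the 60 s RPC timeout covers the NV cache scrub announced just above
  $ rpc.py -t 60 bdev_ftl_create -b ftl -d a8e48a9b-6750-4a6c-9a1e-af61954ad245 -c cachen1p0 --l2p_dram_limit 2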
00:25:59.937 [2024-11-17 14:14:37.984942] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:04.149 [2024-11-17 14:14:42.161194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.161309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:04.149 [2024-11-17 14:14:42.161334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4176.238 ms 00:26:04.149 [2024-11-17 14:14:42.161345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.175528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.175590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:04.149 [2024-11-17 14:14:42.175610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.078 ms 00:26:04.149 [2024-11-17 14:14:42.175619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.175699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.175710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:04.149 [2024-11-17 14:14:42.175727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:04.149 [2024-11-17 14:14:42.175735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.188189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.188262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:04.149 [2024-11-17 14:14:42.188286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.408 ms 00:26:04.149 [2024-11-17 14:14:42.188294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.188347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.188362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:04.149 [2024-11-17 14:14:42.188374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:04.149 [2024-11-17 14:14:42.188382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.188977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.189023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:04.149 [2024-11-17 14:14:42.189037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:26:04.149 [2024-11-17 14:14:42.189047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.189108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.189118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:04.149 [2024-11-17 14:14:42.189136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:04.149 [2024-11-17 14:14:42.189145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.207393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.207446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:04.149 [2024-11-17 14:14:42.207463] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.216 ms 00:26:04.149 [2024-11-17 14:14:42.207475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.217800] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:04.149 [2024-11-17 14:14:42.219131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.219172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:04.149 [2024-11-17 14:14:42.219184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.549 ms 00:26:04.149 [2024-11-17 14:14:42.219195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.240434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.240495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:04.149 [2024-11-17 14:14:42.240509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.204 ms 00:26:04.149 [2024-11-17 14:14:42.240525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.240636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.240652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:04.149 [2024-11-17 14:14:42.240661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:04.149 [2024-11-17 14:14:42.240675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.246140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.246198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:04.149 [2024-11-17 14:14:42.246211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.413 ms 00:26:04.149 [2024-11-17 14:14:42.246224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.251473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.251526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:04.149 [2024-11-17 14:14:42.251538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.177 ms 00:26:04.149 [2024-11-17 14:14:42.251548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.149 [2024-11-17 14:14:42.251867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.149 [2024-11-17 14:14:42.251889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:04.150 [2024-11-17 14:14:42.251900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:26:04.150 [2024-11-17 14:14:42.251914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.299669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.150 [2024-11-17 14:14:42.299917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:04.150 [2024-11-17 14:14:42.299947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.715 ms 00:26:04.150 [2024-11-17 14:14:42.299960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.307048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:04.150 [2024-11-17 14:14:42.307282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:04.150 [2024-11-17 14:14:42.307306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.024 ms 00:26:04.150 [2024-11-17 14:14:42.307318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.313154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.150 [2024-11-17 14:14:42.313210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:04.150 [2024-11-17 14:14:42.313222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.789 ms 00:26:04.150 [2024-11-17 14:14:42.313233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.319471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.150 [2024-11-17 14:14:42.319527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:04.150 [2024-11-17 14:14:42.319539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.160 ms 00:26:04.150 [2024-11-17 14:14:42.319554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.319608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.150 [2024-11-17 14:14:42.319623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:04.150 [2024-11-17 14:14:42.319633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:04.150 [2024-11-17 14:14:42.319643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.319735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:04.150 [2024-11-17 14:14:42.319750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:04.150 [2024-11-17 14:14:42.319759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:04.150 [2024-11-17 14:14:42.319771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:04.150 [2024-11-17 14:14:42.320917] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4351.168 ms, result 0 00:26:04.150 { 00:26:04.150 "name": "ftl", 00:26:04.150 "uuid": "3f2b34a3-3f30-4474-8b16-2775e53de0aa" 00:26:04.150 } 00:26:04.150 14:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:04.411 [2024-11-17 14:14:42.548107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.411 14:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:04.672 14:14:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:04.933 [2024-11-17 14:14:42.992534] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:04.933 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:04.933 [2024-11-17 14:14:43.204959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:04.933 14:14:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:05.507 Fill FTL, iteration 1 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=91756 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 91756 /var/tmp/spdk.tgt.sock 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 91756 ']' 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:05.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:05.507 14:14:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:05.507 [2024-11-17 14:14:43.653655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
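Before any fill traffic can reach the new bdev from a second process, it has to be exported over NVMe/TCP. The export side, condensed from the four target-side RPCs traced above (rpc.py again stands for the full scripts/rpc.py path):

  $ rpc.py nvmf_create_transport --trtype TCP
  $ rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1     # -a allow any host, -m max namespaces
  $ rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl        # expose the ftl bdev as a namespace
  $ rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

With the listener up on 127.0.0.1:4420, the second spdk_tgt launched above can attach to the subsystem as an initiator.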
00:26:05.507 [2024-11-17 14:14:43.653780] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91756 ] 00:26:05.507 [2024-11-17 14:14:43.799794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.767 [2024-11-17 14:14:43.841893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.338 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.338 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:06.338 14:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:06.599 ftln1 00:26:06.599 14:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:06.599 14:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 91756 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91756 ']' 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91756 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91756 00:26:06.860 killing process with pid 91756 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91756' 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91756 00:26:06.860 14:14:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91756 00:26:07.122 14:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:07.122 14:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:07.122 [2024-11-17 14:14:45.387445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
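tcp_dd, whose first invocation is traced above, is a thin wrapper rather than a standalone tool: it boots a throwaway spdk_tgt on a private RPC socket, attaches the exported namespace over TCP (surfacing it as bdev ftln1), snapshots the resulting bdev subsystem configuration into ini.json, kills the helper target, and hands that config to spdk_dd, which replays it in-process and performs the transfer. A rough reconstruction from the traced lines (the redirection into ini.json is inferred, not shown verbatim in the trace):

  $ /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  $ spdk_ini_pid=$!
  $ rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0    # prints: ftln1
  $ { echo '{"subsystems": ['
      rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
      echo ']}'; } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  $ kill $spdk_ini_pid
  $ /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

Subsequent tcp_dd calls find ini.json already present and skip straight to spdk_dd, which is why only this first call shows the attach-and-save dance.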
00:26:07.122 [2024-11-17 14:14:45.387560] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91788 ] 00:26:07.383 [2024-11-17 14:14:45.536556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.383 [2024-11-17 14:14:45.578583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.771  [2024-11-17T14:14:48.014Z] Copying: 177/1024 [MB] (177 MBps) [2024-11-17T14:14:48.948Z] Copying: 363/1024 [MB] (186 MBps) [2024-11-17T14:14:49.883Z] Copying: 613/1024 [MB] (250 MBps) [2024-11-17T14:14:50.449Z] Copying: 866/1024 [MB] (253 MBps) [2024-11-17T14:14:50.708Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:26:12.407 00:26:12.407 Calculate MD5 checksum, iteration 1 00:26:12.407 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:12.407 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:12.408 14:14:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:12.408 [2024-11-17 14:14:50.697019] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
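The fill just completed and the checksum pass now starting follow a fixed pattern: each iteration writes one fresh 1 GiB extent of urandom data through ftln1, then reads the same extent back into a scratch file and fingerprints it, with seek and skip advancing by count (1024 blocks of 1 MiB) so that iteration 2 covers the second GiB. Reconstructed roughly from the traced shell lines (tcp_dd is the helper sketched above; exact control flow in upgrade_shutdown.sh may differ):

  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0 sums=()
  for (( i = 0; i < iterations; i++ )); do
      echo "Fill FTL, iteration $(( i + 1 ))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      (( seek += count ))
      echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
      tcp_dd --ib=ftln1 --of=$file --bs=$bs --count=$count --qd=$qd --skip=$skip
      (( skip += count ))
      sums[i]=$(md5sum $file | cut -f1 -d' ')   # keep only the digest column
  done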
00:26:12.408 [2024-11-17 14:14:50.697143] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91846 ] 00:26:12.666 [2024-11-17 14:14:50.845400] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.666 [2024-11-17 14:14:50.884894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.038  [2024-11-17T14:14:52.904Z] Copying: 645/1024 [MB] (645 MBps) [2024-11-17T14:14:53.163Z] Copying: 1024/1024 [MB] (average 624 MBps) 00:26:14.862 00:26:14.862 14:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:14.862 14:14:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:17.405 Fill FTL, iteration 2 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=288a4bc3c96d2a211980d90dc53091b9 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:17.405 14:14:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:17.405 [2024-11-17 14:14:55.179812] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
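The digest recorded above (288a4bc3c96d2a211980d90dc53091b9) is the reference fingerprint for the first GiB. The point of keeping sums[] around is presumably the post-restart verification: after the prep-upgrade shutdown later in this test, the same extents should read back bit-identical. A minimal sketch of that eventual comparison, assuming the script re-reads each extent and diffs digests (the actual verification step is not in this part of the trace):

  # hypothetical re-check after FTL is brought back up
  tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=0
  [[ ${sums[0]} == $(md5sum $file | cut -f1 -d' ') ]] || exit 1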
00:26:17.405 [2024-11-17 14:14:55.180070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91901 ] 00:26:17.405 [2024-11-17 14:14:55.330221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.405 [2024-11-17 14:14:55.372586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.347  [2024-11-17T14:14:57.582Z] Copying: 198/1024 [MB] (198 MBps) [2024-11-17T14:14:58.955Z] Copying: 422/1024 [MB] (224 MBps) [2024-11-17T14:14:59.890Z] Copying: 664/1024 [MB] (242 MBps) [2024-11-17T14:15:00.148Z] Copying: 908/1024 [MB] (244 MBps) [2024-11-17T14:15:00.408Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:26:22.107 00:26:22.107 Calculate MD5 checksum, iteration 2 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:22.107 14:15:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.107 [2024-11-17 14:15:00.323318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
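spdk_dd deliberately mirrors coreutils dd: --if/--of name ordinary files, --ib/--ob name SPDK bdevs, and --bs, --count, --seek and --skip keep their dd meanings in bs-sized blocks. If ftln1 were exposed as a kernel block device (it is not here; this is only an analogy), the read-back starting above would be equivalent in intent to:

  $ dd if=/dev/ftln1 of=/home/vagrant/spdk_repo/spdk/test/ftl/file bs=1048576 count=1024 skip=1024

The difference is that spdk_dd runs the whole transfer in user space over the NVMe/TCP attachment, keeping --qd=2 requests in flight.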
00:26:22.107 [2024-11-17 14:15:00.323444] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91956 ] 00:26:22.366 [2024-11-17 14:15:00.473374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.366 [2024-11-17 14:15:00.515825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.742  [2024-11-17T14:15:02.695Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-17T14:15:03.261Z] Copying: 1024/1024 [MB] (average 606 MBps) 00:26:24.960 00:26:24.960 14:15:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:24.960 14:15:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:26.860 14:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:26.860 14:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=442e0d55ed45c48cb462be115e3495ba 00:26:26.860 14:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:26.860 14:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:26.860 14:15:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:27.119 [2024-11-17 14:15:05.161602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.119 [2024-11-17 14:15:05.161645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:27.119 [2024-11-17 14:15:05.161658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:27.119 [2024-11-17 14:15:05.161665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.119 [2024-11-17 14:15:05.161684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.119 [2024-11-17 14:15:05.161691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:27.119 [2024-11-17 14:15:05.161701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:27.119 [2024-11-17 14:15:05.161708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.119 [2024-11-17 14:15:05.161724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.119 [2024-11-17 14:15:05.161731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:27.119 [2024-11-17 14:15:05.161738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.119 [2024-11-17 14:15:05.161747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.119 [2024-11-17 14:15:05.161802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.233 ms, result 0 00:26:27.119 true 00:26:27.119 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:27.119 { 00:26:27.119 "name": "ftl", 00:26:27.119 "properties": [ 00:26:27.119 { 00:26:27.119 "name": "superblock_version", 00:26:27.119 "value": 5, 00:26:27.119 "read-only": true 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "name": "base_device", 00:26:27.119 "bands": [ 00:26:27.119 { 00:26:27.119 "id": 0, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 
00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 1, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 2, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 3, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 4, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 5, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 6, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 7, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 8, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 9, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 10, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 11, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 12, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 13, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 14, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 15, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 16, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 17, 00:26:27.119 "state": "FREE", 00:26:27.119 "validity": 0.0 00:26:27.119 } 00:26:27.119 ], 00:26:27.119 "read-only": true 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "name": "cache_device", 00:26:27.119 "type": "bdev", 00:26:27.119 "chunks": [ 00:26:27.119 { 00:26:27.119 "id": 0, 00:26:27.119 "state": "INACTIVE", 00:26:27.119 "utilization": 0.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 1, 00:26:27.119 "state": "CLOSED", 00:26:27.119 "utilization": 1.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 2, 00:26:27.119 "state": "CLOSED", 00:26:27.119 "utilization": 1.0 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 3, 00:26:27.119 "state": "OPEN", 00:26:27.119 "utilization": 0.001953125 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "id": 4, 00:26:27.119 "state": "OPEN", 00:26:27.119 "utilization": 0.0 00:26:27.119 } 00:26:27.119 ], 00:26:27.119 "read-only": true 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "name": "verbose_mode", 00:26:27.119 "value": true, 00:26:27.119 "unit": "", 00:26:27.119 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:27.119 }, 00:26:27.119 { 00:26:27.119 "name": "prep_upgrade_on_shutdown", 00:26:27.119 "value": false, 00:26:27.119 "unit": "", 00:26:27.119 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:27.119 } 00:26:27.119 ] 00:26:27.119 } 00:26:27.119 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:27.378 [2024-11-17 14:15:05.577853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:27.378 [2024-11-17 14:15:05.577889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:27.378 [2024-11-17 14:15:05.577899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:27.378 [2024-11-17 14:15:05.577906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.378 [2024-11-17 14:15:05.577922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.378 [2024-11-17 14:15:05.577929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:27.378 [2024-11-17 14:15:05.577936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:27.378 [2024-11-17 14:15:05.577942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.378 [2024-11-17 14:15:05.577957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.378 [2024-11-17 14:15:05.577963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:27.378 [2024-11-17 14:15:05.577969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.378 [2024-11-17 14:15:05.577975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.378 [2024-11-17 14:15:05.578022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:26:27.378 true 00:26:27.378 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:27.378 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:27.378 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:27.636 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:27.636 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:27.636 14:15:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:27.896 [2024-11-17 14:15:05.986169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.896 [2024-11-17 14:15:05.986299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:27.896 [2024-11-17 14:15:05.986345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:27.896 [2024-11-17 14:15:05.986363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.896 [2024-11-17 14:15:05.986394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.896 [2024-11-17 14:15:05.986411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:27.896 [2024-11-17 14:15:05.986427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:27.896 [2024-11-17 14:15:05.986446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:27.896 [2024-11-17 14:15:05.986474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:27.896 [2024-11-17 14:15:05.986491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:27.896 [2024-11-17 14:15:05.986507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:27.896 [2024-11-17 14:15:05.986952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:27.896 [2024-11-17 14:15:05.987371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.001 ms, result 0 00:26:27.896 true 00:26:27.896 14:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:28.157 { 00:26:28.157 "name": "ftl", 00:26:28.157 "properties": [ 00:26:28.157 { 00:26:28.157 "name": "superblock_version", 00:26:28.157 "value": 5, 00:26:28.157 "read-only": true 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "name": "base_device", 00:26:28.157 "bands": [ 00:26:28.157 { 00:26:28.157 "id": 0, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 1, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 2, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 3, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 4, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 5, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 6, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 7, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 8, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 9, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 10, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 11, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 12, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 13, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 14, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 15, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 16, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 }, 00:26:28.157 { 00:26:28.157 "id": 17, 00:26:28.157 "state": "FREE", 00:26:28.157 "validity": 0.0 00:26:28.157 } 00:26:28.157 ], 00:26:28.157 "read-only": true 00:26:28.157 }, 00:26:28.157 { 00:26:28.158 "name": "cache_device", 00:26:28.158 "type": "bdev", 00:26:28.158 "chunks": [ 00:26:28.158 { 00:26:28.158 "id": 0, 00:26:28.158 "state": "INACTIVE", 00:26:28.158 "utilization": 0.0 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "id": 1, 00:26:28.158 "state": "CLOSED", 00:26:28.158 "utilization": 1.0 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "id": 2, 00:26:28.158 "state": "CLOSED", 00:26:28.158 "utilization": 1.0 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "id": 3, 00:26:28.158 "state": "OPEN", 00:26:28.158 "utilization": 0.001953125 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "id": 4, 00:26:28.158 "state": "OPEN", 00:26:28.158 "utilization": 0.0 00:26:28.158 } 00:26:28.158 ], 00:26:28.158 "read-only": true 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "name": "verbose_mode", 
00:26:28.158 "value": true, 00:26:28.158 "unit": "", 00:26:28.158 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:28.158 }, 00:26:28.158 { 00:26:28.158 "name": "prep_upgrade_on_shutdown", 00:26:28.158 "value": true, 00:26:28.158 "unit": "", 00:26:28.158 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:28.158 } 00:26:28.158 ] 00:26:28.158 } 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 91629 ]] 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 91629 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 91629 ']' 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 91629 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91629 00:26:28.158 killing process with pid 91629 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91629' 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 91629 00:26:28.158 14:15:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 91629 00:26:28.158 [2024-11-17 14:15:06.430880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:28.158 [2024-11-17 14:15:06.438742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.158 [2024-11-17 14:15:06.438793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:28.158 [2024-11-17 14:15:06.438809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:28.158 [2024-11-17 14:15:06.438818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.158 [2024-11-17 14:15:06.438843] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:28.158 [2024-11-17 14:15:06.439695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.158 [2024-11-17 14:15:06.439737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:28.158 [2024-11-17 14:15:06.439757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:26:28.158 [2024-11-17 14:15:06.439767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.132151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.132196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:38.166 [2024-11-17 14:15:15.132210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8692.322 ms 00:26:38.166 [2024-11-17 14:15:15.132217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.133108] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.133124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:38.166 [2024-11-17 14:15:15.133132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.879 ms 00:26:38.166 [2024-11-17 14:15:15.133138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.134024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.134037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:38.166 [2024-11-17 14:15:15.134048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.861 ms 00:26:38.166 [2024-11-17 14:15:15.134056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.135830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.135865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:38.166 [2024-11-17 14:15:15.135874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.749 ms 00:26:38.166 [2024-11-17 14:15:15.135881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.137698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.137726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:38.166 [2024-11-17 14:15:15.137735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.789 ms 00:26:38.166 [2024-11-17 14:15:15.137741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.137796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.137808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:38.166 [2024-11-17 14:15:15.137818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:26:38.166 [2024-11-17 14:15:15.137825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.139499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.139607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:38.166 [2024-11-17 14:15:15.139642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.653 ms 00:26:38.166 [2024-11-17 14:15:15.139664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.141812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.141890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:38.166 [2024-11-17 14:15:15.141917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.023 ms 00:26:38.166 [2024-11-17 14:15:15.141940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.144177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.144283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:38.166 [2024-11-17 14:15:15.144309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.158 ms 00:26:38.166 [2024-11-17 14:15:15.144330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.146417] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.166 [2024-11-17 14:15:15.146490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:38.166 [2024-11-17 14:15:15.146518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.926 ms 00:26:38.166 [2024-11-17 14:15:15.146543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.166 [2024-11-17 14:15:15.146619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:38.166 [2024-11-17 14:15:15.146658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.166 [2024-11-17 14:15:15.146689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:38.166 [2024-11-17 14:15:15.146713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:38.166 [2024-11-17 14:15:15.146737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.146991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.147014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.147036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.147059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:38.166 [2024-11-17 14:15:15.147087] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:38.166 [2024-11-17 14:15:15.147110] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3f2b34a3-3f30-4474-8b16-2775e53de0aa 00:26:38.166 [2024-11-17 14:15:15.147151] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:38.166 [2024-11-17 14:15:15.147172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:38.166 [2024-11-17 14:15:15.147201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:38.166 [2024-11-17 14:15:15.147224] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:38.166 [2024-11-17 14:15:15.147293] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:38.166 [2024-11-17 14:15:15.147325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:38.166 [2024-11-17 14:15:15.147346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:38.166 [2024-11-17 14:15:15.147366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:38.166 [2024-11-17 14:15:15.147386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:38.166 [2024-11-17 14:15:15.147409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.167 [2024-11-17 14:15:15.147432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:38.167 [2024-11-17 14:15:15.147466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.792 ms 00:26:38.167 [2024-11-17 14:15:15.147488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.150175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.167 [2024-11-17 14:15:15.150271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:38.167 [2024-11-17 14:15:15.150298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.644 ms 00:26:38.167 [2024-11-17 14:15:15.150329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.150472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:38.167 [2024-11-17 14:15:15.150496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:38.167 [2024-11-17 14:15:15.150520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.097 ms 00:26:38.167 [2024-11-17 14:15:15.150541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.156136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.156288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:38.167 [2024-11-17 14:15:15.156309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.156316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.156342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.156350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:38.167 [2024-11-17 14:15:15.156357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.156369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.156434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.156444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:38.167 [2024-11-17 14:15:15.156455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.156465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.156480] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.156488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:38.167 [2024-11-17 14:15:15.156495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.156502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.164844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.164878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:38.167 [2024-11-17 14:15:15.164896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.164903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:38.167 [2024-11-17 14:15:15.172230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:38.167 [2024-11-17 14:15:15.172307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:38.167 [2024-11-17 14:15:15.172383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:38.167 [2024-11-17 14:15:15.172467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:38.167 [2024-11-17 14:15:15.172528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:38.167 [2024-11-17 14:15:15.172590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 
[2024-11-17 14:15:15.172642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:38.167 [2024-11-17 14:15:15.172652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:38.167 [2024-11-17 14:15:15.172663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:38.167 [2024-11-17 14:15:15.172670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:38.167 [2024-11-17 14:15:15.172785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8733.998 ms, result 0 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92140 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92140 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92140 ']' 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.715 14:15:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:40.715 [2024-11-17 14:15:18.756931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
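The restart traced above follows a fixed pattern: tcp_target_setup relaunches spdk_tgt pinned to core 0 with the FTL configuration saved in test/ftl/config/tgt.json, then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal bash sketch of that pattern, assuming $SPDK_DIR points at the repository root and using the standard rpc_get_methods call as the readiness probe (the suite's own waitforlisten helper does more bookkeeping than this):

spdk_tgt_pid=
tcp_target_setup() {
    # Relaunch the target on core 0 from the saved FTL/NVMe-oF configuration.
    "$SPDK_DIR/build/bin/spdk_tgt" --cpumask='[0]' \
        --config="$SPDK_DIR/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    # Poll the RPC socket until the target answers (waitforlisten equivalent).
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
}

Because the JSON config is replayed wholesale, the NVMe/TCP subsystem comes back too, which is why the listener on 127.0.0.1 port 4420 reappears later in the trace without any explicit RPC calls.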
00:26:40.715 [2024-11-17 14:15:18.757064] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92140 ] 00:26:40.715 [2024-11-17 14:15:18.911525] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.715 [2024-11-17 14:15:18.961724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.288 [2024-11-17 14:15:19.293811] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:41.288 [2024-11-17 14:15:19.293896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:41.288 [2024-11-17 14:15:19.446605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.288 [2024-11-17 14:15:19.446661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:41.288 [2024-11-17 14:15:19.446679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:41.288 [2024-11-17 14:15:19.446688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.288 [2024-11-17 14:15:19.446760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.288 [2024-11-17 14:15:19.446772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:41.288 [2024-11-17 14:15:19.446781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:41.289 [2024-11-17 14:15:19.446789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.446819] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:41.289 [2024-11-17 14:15:19.447091] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:41.289 [2024-11-17 14:15:19.447107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.447115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:41.289 [2024-11-17 14:15:19.447127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.298 ms 00:26:41.289 [2024-11-17 14:15:19.447135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.448953] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:41.289 [2024-11-17 14:15:19.453414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.453467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:41.289 [2024-11-17 14:15:19.453480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.462 ms 00:26:41.289 [2024-11-17 14:15:19.453496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.453579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.453597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:41.289 [2024-11-17 14:15:19.453608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:41.289 [2024-11-17 14:15:19.453617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.462425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 
14:15:19.462605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:41.289 [2024-11-17 14:15:19.462632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.752 ms 00:26:41.289 [2024-11-17 14:15:19.462641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.462691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.462701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:41.289 [2024-11-17 14:15:19.462710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:26:41.289 [2024-11-17 14:15:19.462718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.462789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.462799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:41.289 [2024-11-17 14:15:19.462808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:41.289 [2024-11-17 14:15:19.462819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.462848] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:41.289 [2024-11-17 14:15:19.464941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.464977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:41.289 [2024-11-17 14:15:19.464988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.101 ms 00:26:41.289 [2024-11-17 14:15:19.464995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.465025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.465041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:41.289 [2024-11-17 14:15:19.465051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:41.289 [2024-11-17 14:15:19.465061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.465087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:41.289 [2024-11-17 14:15:19.465109] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:41.289 [2024-11-17 14:15:19.465147] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:41.289 [2024-11-17 14:15:19.465162] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:41.289 [2024-11-17 14:15:19.465293] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:41.289 [2024-11-17 14:15:19.465307] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:41.289 [2024-11-17 14:15:19.465324] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:41.289 [2024-11-17 14:15:19.465335] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465350] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465359] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:41.289 [2024-11-17 14:15:19.465367] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:41.289 [2024-11-17 14:15:19.465376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:41.289 [2024-11-17 14:15:19.465387] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:41.289 [2024-11-17 14:15:19.465396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.465407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:41.289 [2024-11-17 14:15:19.465415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:26:41.289 [2024-11-17 14:15:19.465424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.465514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.289 [2024-11-17 14:15:19.465522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:41.289 [2024-11-17 14:15:19.465530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:41.289 [2024-11-17 14:15:19.465538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.289 [2024-11-17 14:15:19.465645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:41.289 [2024-11-17 14:15:19.465661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:41.289 [2024-11-17 14:15:19.465672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:41.289 [2024-11-17 14:15:19.465699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:41.289 [2024-11-17 14:15:19.465714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:41.289 [2024-11-17 14:15:19.465722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:41.289 [2024-11-17 14:15:19.465731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:41.289 [2024-11-17 14:15:19.465747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:41.289 [2024-11-17 14:15:19.465755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:41.289 [2024-11-17 14:15:19.465778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:41.289 [2024-11-17 14:15:19.465786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:41.289 [2024-11-17 14:15:19.465808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:41.289 [2024-11-17 14:15:19.465829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.289 [2024-11-17 14:15:19.465838] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:41.289 [2024-11-17 14:15:19.465847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:41.289 [2024-11-17 14:15:19.465856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:41.289 [2024-11-17 14:15:19.465874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:41.289 [2024-11-17 14:15:19.465883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:41.289 [2024-11-17 14:15:19.465899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:41.289 [2024-11-17 14:15:19.465907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.289 [2024-11-17 14:15:19.465915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:41.290 [2024-11-17 14:15:19.465922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:41.290 [2024-11-17 14:15:19.465930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.290 [2024-11-17 14:15:19.465938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:41.290 [2024-11-17 14:15:19.465950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:41.290 [2024-11-17 14:15:19.465958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.290 [2024-11-17 14:15:19.465965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:41.290 [2024-11-17 14:15:19.465973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:41.290 [2024-11-17 14:15:19.465981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.290 [2024-11-17 14:15:19.465989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:41.290 [2024-11-17 14:15:19.465997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:41.290 [2024-11-17 14:15:19.466005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.290 [2024-11-17 14:15:19.466013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:41.290 [2024-11-17 14:15:19.466021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:41.290 [2024-11-17 14:15:19.466029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.290 [2024-11-17 14:15:19.466037] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:41.290 [2024-11-17 14:15:19.466049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:41.290 [2024-11-17 14:15:19.466058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:41.290 [2024-11-17 14:15:19.466067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.290 [2024-11-17 14:15:19.466075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:41.290 [2024-11-17 14:15:19.466088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:41.290 [2024-11-17 14:15:19.466096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:41.290 [2024-11-17 14:15:19.466103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:41.290 [2024-11-17 14:15:19.466111] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:41.290 [2024-11-17 14:15:19.466117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:41.290 [2024-11-17 14:15:19.466126] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:41.290 [2024-11-17 14:15:19.466138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:41.290 [2024-11-17 14:15:19.466155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:41.290 [2024-11-17 14:15:19.466178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:41.290 [2024-11-17 14:15:19.466185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:41.290 [2024-11-17 14:15:19.466192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:41.290 [2024-11-17 14:15:19.466199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:41.290 [2024-11-17 14:15:19.466269] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:41.290 [2024-11-17 14:15:19.466279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.290 [2024-11-17 14:15:19.466294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:41.290 [2024-11-17 14:15:19.466301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:41.290 [2024-11-17 14:15:19.466308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:41.290 [2024-11-17 14:15:19.466318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.290 [2024-11-17 14:15:19.466325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:41.290 [2024-11-17 14:15:19.466333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.744 ms 00:26:41.290 [2024-11-17 14:15:19.466343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.290 [2024-11-17 14:15:19.466387] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:41.290 [2024-11-17 14:15:19.466405] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:46.585 [2024-11-17 14:15:23.864429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.864513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:46.585 [2024-11-17 14:15:23.864531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4398.023 ms 00:26:46.585 [2024-11-17 14:15:23.864552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.877786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.877841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:46.585 [2024-11-17 14:15:23.877856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.112 ms 00:26:46.585 [2024-11-17 14:15:23.877865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.877918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.877927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:46.585 [2024-11-17 14:15:23.877945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:46.585 [2024-11-17 14:15:23.877953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.899923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.899987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:46.585 [2024-11-17 14:15:23.900002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.913 ms 00:26:46.585 [2024-11-17 14:15:23.900012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.900071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.900082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:46.585 [2024-11-17 14:15:23.900093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:46.585 [2024-11-17 14:15:23.900102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.900725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.900750] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:46.585 [2024-11-17 14:15:23.900763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.550 ms 00:26:46.585 [2024-11-17 14:15:23.900773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.900843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.900856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:46.585 [2024-11-17 14:15:23.900866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:46.585 [2024-11-17 14:15:23.900882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.909198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.909277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:46.585 [2024-11-17 14:15:23.909289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.292 ms 00:26:46.585 [2024-11-17 14:15:23.909297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.912885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:46.585 [2024-11-17 14:15:23.912937] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:46.585 [2024-11-17 14:15:23.912951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.912960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:46.585 [2024-11-17 14:15:23.912969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.561 ms 00:26:46.585 [2024-11-17 14:15:23.912977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.917583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.917643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:46.585 [2024-11-17 14:15:23.917662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.555 ms 00:26:46.585 [2024-11-17 14:15:23.917670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.920105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.920151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:46.585 [2024-11-17 14:15:23.920161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.382 ms 00:26:46.585 [2024-11-17 14:15:23.920169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.922465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.922643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:46.585 [2024-11-17 14:15:23.922662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.248 ms 00:26:46.585 [2024-11-17 14:15:23.922669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.923113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.923137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:46.585 [2024-11-17 
14:15:23.923146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:26:46.585 [2024-11-17 14:15:23.923155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.947228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.947316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:46.585 [2024-11-17 14:15:23.947335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.053 ms 00:26:46.585 [2024-11-17 14:15:23.947344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.955310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:46.585 [2024-11-17 14:15:23.956269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.956303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:46.585 [2024-11-17 14:15:23.956314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.870 ms 00:26:46.585 [2024-11-17 14:15:23.956328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.956427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.956438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:46.585 [2024-11-17 14:15:23.956448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:46.585 [2024-11-17 14:15:23.956461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.956507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.956518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:46.585 [2024-11-17 14:15:23.956526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:46.585 [2024-11-17 14:15:23.956534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.956559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.956568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:46.585 [2024-11-17 14:15:23.956576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:46.585 [2024-11-17 14:15:23.956584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.956619] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:46.585 [2024-11-17 14:15:23.956630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.956639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:46.585 [2024-11-17 14:15:23.956655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:46.585 [2024-11-17 14:15:23.956662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.961361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.961413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:46.585 [2024-11-17 14:15:23.961424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.677 ms 00:26:46.585 [2024-11-17 14:15:23.961432] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.961520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:23.961530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:46.585 [2024-11-17 14:15:23.961544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:26:46.585 [2024-11-17 14:15:23.961552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:23.962650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4515.582 ms, result 0 00:26:46.585 [2024-11-17 14:15:23.976703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.585 [2024-11-17 14:15:23.992699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:46.585 [2024-11-17 14:15:24.000849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:46.585 14:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:46.585 14:15:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:46.585 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:46.585 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:46.585 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:46.585 [2024-11-17 14:15:24.244949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.585 [2024-11-17 14:15:24.245003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:46.585 [2024-11-17 14:15:24.245016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:46.585 [2024-11-17 14:15:24.245025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.585 [2024-11-17 14:15:24.245048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.586 [2024-11-17 14:15:24.245057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:46.586 [2024-11-17 14:15:24.245065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:46.586 [2024-11-17 14:15:24.245074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.586 [2024-11-17 14:15:24.245099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:46.586 [2024-11-17 14:15:24.245107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:46.586 [2024-11-17 14:15:24.245116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:46.586 [2024-11-17 14:15:24.245123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:46.586 [2024-11-17 14:15:24.245183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.227 ms, result 0 00:26:46.586 true 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:46.586 { 00:26:46.586 "name": "ftl", 00:26:46.586 "properties": [ 00:26:46.586 { 00:26:46.586 "name": "superblock_version", 00:26:46.586 "value": 5, 00:26:46.586 "read-only": true 00:26:46.586 }, 00:26:46.586 { 
00:26:46.586 "name": "base_device", 00:26:46.586 "bands": [ 00:26:46.586 { 00:26:46.586 "id": 0, 00:26:46.586 "state": "CLOSED", 00:26:46.586 "validity": 1.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 1, 00:26:46.586 "state": "CLOSED", 00:26:46.586 "validity": 1.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 2, 00:26:46.586 "state": "CLOSED", 00:26:46.586 "validity": 0.007843137254901933 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 3, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 4, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 5, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 6, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 7, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 8, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 9, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 10, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 11, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 12, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 13, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 14, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 15, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 16, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 17, 00:26:46.586 "state": "FREE", 00:26:46.586 "validity": 0.0 00:26:46.586 } 00:26:46.586 ], 00:26:46.586 "read-only": true 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "name": "cache_device", 00:26:46.586 "type": "bdev", 00:26:46.586 "chunks": [ 00:26:46.586 { 00:26:46.586 "id": 0, 00:26:46.586 "state": "INACTIVE", 00:26:46.586 "utilization": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 1, 00:26:46.586 "state": "OPEN", 00:26:46.586 "utilization": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 2, 00:26:46.586 "state": "OPEN", 00:26:46.586 "utilization": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 3, 00:26:46.586 "state": "FREE", 00:26:46.586 "utilization": 0.0 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "id": 4, 00:26:46.586 "state": "FREE", 00:26:46.586 "utilization": 0.0 00:26:46.586 } 00:26:46.586 ], 00:26:46.586 "read-only": true 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "name": "verbose_mode", 00:26:46.586 "value": true, 00:26:46.586 "unit": "", 00:26:46.586 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:46.586 }, 00:26:46.586 { 00:26:46.586 "name": "prep_upgrade_on_shutdown", 00:26:46.586 "value": false, 00:26:46.586 "unit": "", 00:26:46.586 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:46.586 } 00:26:46.586 ] 00:26:46.586 } 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:46.586 14:15:24 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:46.586 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:46.847 Validate MD5 checksum, iteration 1 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:46.847 14:15:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:46.847 [2024-11-17 14:15:24.989696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
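Worth a gloss: the two jq assertions traced just above are the actual post-restart upgrade check. After a target restart that followed a clean prep_upgrade_on_shutdown run, the NV cache must hold no undrained chunks and no band may still be open. A condensed sketch of those checks, with rpc standing in for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py invocation seen in the trace:

# Count NV-cache chunks that still hold data; must be 0 after the upgrade path.
used=$(rpc bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
[[ $used -eq 0 ]]

# Count bands left in the OPENED state; likewise must be 0.
opened=$(rpc bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
[[ $opened -eq 0 ]]

Contrast this with the same chunk filter before the shutdown, where it returned used=3 and the test required the count to be non-zero: the dirty data present then must have been flushed by the shutdown-for-upgrade sequence.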
00:26:46.847 [2024-11-17 14:15:24.989836] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92221 ] 00:26:46.847 [2024-11-17 14:15:25.143205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.107 [2024-11-17 14:15:25.193113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.492  [2024-11-17T14:15:27.736Z] Copying: 530/1024 [MB] (530 MBps) [2024-11-17T14:15:28.307Z] Copying: 1024/1024 [MB] (average 522 MBps) 00:26:50.006 00:26:50.267 14:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:50.267 14:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:52.812 Validate MD5 checksum, iteration 2 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=288a4bc3c96d2a211980d90dc53091b9 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 288a4bc3c96d2a211980d90dc53091b9 != \2\8\8\a\4\b\c\3\c\9\6\d\2\a\2\1\1\9\8\0\d\9\0\d\c\5\3\0\9\1\b\9 ]] 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:52.812 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:52.813 14:15:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:52.813 [2024-11-17 14:15:30.602323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
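Iteration 1 has just read the first 1024 MiB back from the ftln1 namespace over NVMe/TCP and matched the digest 288a4bc3c96d2a211980d90dc53091b9; the trace then bumps skip to 1024 and launches iteration 2. Reconstructed from the @96-@105 trace lines, the loop looks roughly like this (testfile and ref_md5 are stand-in names; the script's own variables for the file path and for the digests presumably recorded when the data was first written are not visible in this excerpt):

    # Shape of test_validate_checksum (upgrade_shutdown.sh@96-@105); a sketch, not the script verbatim.
    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # Read 1024 blocks of 1 MiB from the FTL namespace at the current offset, queue depth 2.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$testfile" | cut -d' ' -f1)
            # Must equal the reference digest for this slice (e.g. 288a4bc3... for iteration 1).
            [[ $sum == "${ref_md5[i]}" ]] || return 1
        done
    }

The same helper runs twice in this test: once here (@111) against the still-running target, and again at @116 after the dirty shutdown and recovery, to prove the recovered device returns identical data.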
00:26:52.813 [2024-11-17 14:15:30.602441] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92281 ] 00:26:52.813 [2024-11-17 14:15:30.749395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.813 [2024-11-17 14:15:30.800792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.198  [2024-11-17T14:15:33.437Z] Copying: 480/1024 [MB] (480 MBps) [2024-11-17T14:15:33.698Z] Copying: 1024/1024 [MB] (average 518 MBps) 00:26:55.397 00:26:55.397 14:15:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:55.397 14:15:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442e0d55ed45c48cb462be115e3495ba 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442e0d55ed45c48cb462be115e3495ba != \4\4\2\e\0\d\5\5\e\d\4\5\c\4\8\c\b\4\6\2\b\e\1\1\5\e\3\4\9\5\b\a ]] 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92140 ]] 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92140 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92338 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92338 00:26:57.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92338 ']' 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
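Both digests match, so the test reaches the step it exists to exercise: the target (pid 92140) is killed with SIGKILL, giving FTL no chance to persist a clean-shutdown state, and a new target (pid 92338) is immediately started from the saved tgt.json. In outline, per the ftl/common.sh trace above (paths shortened from /home/vagrant/spdk_repo/spdk; the backgrounding and pid capture are implied by spdk_tgt_pid=92338 rather than shown verbatim):

    # tcp_target_shutdown_dirty (ftl/common.sh@137-@139): no RPC shutdown, just SIGKILL.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid

    # tcp_target_setup (ftl/common.sh@81-@91): relaunch from the saved config. FTL now
    # has to come up dirty and recover via shared memory, P2L checkpoints and the NV cache.
    build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid

The long NOTICE run that follows is exactly that recovery: the superblock is loaded and validated, P2L checkpoints 0-3 are located (seq_ids 8, 9, 12 and 0), the two open NV-cache chunks at offsets 262144 and 524288 are replayed and closed (seq ids 14 and 15), the L2P is restored from shared memory, and the whole 'FTL startup' management process finishes in 1738.599 ms before the NVMe/TCP listener comes back on port 4420.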
00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:57.302 14:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:57.561 [2024-11-17 14:15:35.648142] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:57.561 [2024-11-17 14:15:35.648270] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92338 ] 00:26:57.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92140 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:57.561 [2024-11-17 14:15:35.794409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.561 [2024-11-17 14:15:35.835076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.129 [2024-11-17 14:15:36.125926] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:58.129 [2024-11-17 14:15:36.125982] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:58.129 [2024-11-17 14:15:36.264212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.264258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:58.129 [2024-11-17 14:15:36.264269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:58.129 [2024-11-17 14:15:36.264300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.264346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.264354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:58.129 [2024-11-17 14:15:36.264361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:58.129 [2024-11-17 14:15:36.264366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.264390] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:58.129 [2024-11-17 14:15:36.264577] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:58.129 [2024-11-17 14:15:36.264592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.264598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:58.129 [2024-11-17 14:15:36.264606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:26:58.129 [2024-11-17 14:15:36.264614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.264804] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:58.129 [2024-11-17 14:15:36.269595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.269628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:58.129 [2024-11-17 14:15:36.269638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.792 ms 00:26:58.129 [2024-11-17 14:15:36.269649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.270581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:58.129 [2024-11-17 14:15:36.270612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:58.129 [2024-11-17 14:15:36.270621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:58.129 [2024-11-17 14:15:36.270628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.270840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.270851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:58.129 [2024-11-17 14:15:36.270861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:26:58.129 [2024-11-17 14:15:36.270867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.270894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.270901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:58.129 [2024-11-17 14:15:36.270908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:58.129 [2024-11-17 14:15:36.270914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.129 [2024-11-17 14:15:36.270936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.129 [2024-11-17 14:15:36.270943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:58.129 [2024-11-17 14:15:36.270954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:58.129 [2024-11-17 14:15:36.270962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.130 [2024-11-17 14:15:36.270985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:58.130 [2024-11-17 14:15:36.271708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.130 [2024-11-17 14:15:36.271722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:58.130 [2024-11-17 14:15:36.271733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.728 ms 00:26:58.130 [2024-11-17 14:15:36.271740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.130 [2024-11-17 14:15:36.271763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.130 [2024-11-17 14:15:36.271771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:58.130 [2024-11-17 14:15:36.271778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:58.130 [2024-11-17 14:15:36.271788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.130 [2024-11-17 14:15:36.271804] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:58.130 [2024-11-17 14:15:36.271823] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:58.130 [2024-11-17 14:15:36.271853] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:58.130 [2024-11-17 14:15:36.271864] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:58.130 [2024-11-17 14:15:36.271946] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:58.130 [2024-11-17 14:15:36.271957] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:58.130 [2024-11-17 14:15:36.271968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:58.130 [2024-11-17 14:15:36.271976] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:58.130 [2024-11-17 14:15:36.271983] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:58.130 [2024-11-17 14:15:36.271991] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:58.130 [2024-11-17 14:15:36.271997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:58.130 [2024-11-17 14:15:36.272003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:58.130 [2024-11-17 14:15:36.272009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:58.130 [2024-11-17 14:15:36.272015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.130 [2024-11-17 14:15:36.272024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:58.130 [2024-11-17 14:15:36.272030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:26:58.130 [2024-11-17 14:15:36.272035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.130 [2024-11-17 14:15:36.272103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.130 [2024-11-17 14:15:36.272110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:58.130 [2024-11-17 14:15:36.272117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:26:58.130 [2024-11-17 14:15:36.272122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.130 [2024-11-17 14:15:36.272200] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:58.130 [2024-11-17 14:15:36.272209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:58.130 [2024-11-17 14:15:36.272217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:58.130 [2024-11-17 14:15:36.272429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:58.130 [2024-11-17 14:15:36.272461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:58.130 [2024-11-17 14:15:36.272476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:58.130 [2024-11-17 14:15:36.272490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:58.130 [2024-11-17 14:15:36.272519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:58.130 [2024-11-17 14:15:36.272538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:58.130 [2024-11-17 14:15:36.272567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:26:58.130 [2024-11-17 14:15:36.272580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:58.130 [2024-11-17 14:15:36.272613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:58.130 [2024-11-17 14:15:36.272627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:58.130 [2024-11-17 14:15:36.272654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:58.130 [2024-11-17 14:15:36.272669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:58.130 [2024-11-17 14:15:36.272745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:58.130 [2024-11-17 14:15:36.272759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:58.130 [2024-11-17 14:15:36.272788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:58.130 [2024-11-17 14:15:36.272802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:58.130 [2024-11-17 14:15:36.272830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:58.130 [2024-11-17 14:15:36.272844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:58.130 [2024-11-17 14:15:36.272874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:58.130 [2024-11-17 14:15:36.272889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:58.130 [2024-11-17 14:15:36.272955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:58.130 [2024-11-17 14:15:36.272972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.272986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:58.130 [2024-11-17 14:15:36.273001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:58.130 [2024-11-17 14:15:36.273014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.273030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:58.130 [2024-11-17 14:15:36.273043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:58.130 [2024-11-17 14:15:36.273057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.273071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:58.130 [2024-11-17 14:15:36.273092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:58.130 [2024-11-17 14:15:36.273109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:58.130 [2024-11-17 14:15:36.273159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:26:58.130 [2024-11-17 14:15:36.273177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:58.130 [2024-11-17 14:15:36.273194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:58.130 [2024-11-17 14:15:36.273209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:58.130 [2024-11-17 14:15:36.273223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:58.130 [2024-11-17 14:15:36.273249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:58.130 [2024-11-17 14:15:36.273265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:58.130 [2024-11-17 14:15:36.273281] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:58.130 [2024-11-17 14:15:36.273306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:58.130 [2024-11-17 14:15:36.273392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:58.130 [2024-11-17 14:15:36.273459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:58.130 [2024-11-17 14:15:36.273481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:58.130 [2024-11-17 14:15:36.273502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:58.130 [2024-11-17 14:15:36.273524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:58.130 [2024-11-17 14:15:36.273685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:58.130 [2024-11-17 14:15:36.273708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:26:58.131 [2024-11-17 14:15:36.273758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:58.131 [2024-11-17 14:15:36.273783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:58.131 [2024-11-17 14:15:36.273805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:58.131 [2024-11-17 14:15:36.273826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:58.131 [2024-11-17 14:15:36.273848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:58.131 [2024-11-17 14:15:36.273870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.273885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:58.131 [2024-11-17 14:15:36.273900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.725 ms 00:26:58.131 [2024-11-17 14:15:36.273919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.282369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.282473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:58.131 [2024-11-17 14:15:36.282524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.270 ms 00:26:58.131 [2024-11-17 14:15:36.282541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.282601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.282620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:58.131 [2024-11-17 14:15:36.282659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:58.131 [2024-11-17 14:15:36.282677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.301672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.301793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:58.131 [2024-11-17 14:15:36.301841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.943 ms 00:26:58.131 [2024-11-17 14:15:36.301876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.301925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.301963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:58.131 [2024-11-17 14:15:36.301981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:58.131 [2024-11-17 14:15:36.301998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.302106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.302176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:58.131 [2024-11-17 14:15:36.302196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:26:58.131 [2024-11-17 14:15:36.302214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.302275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.302307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:58.131 [2024-11-17 14:15:36.302323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:58.131 [2024-11-17 14:15:36.302337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.310425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.310469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:58.131 [2024-11-17 14:15:36.310487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.034 ms 00:26:58.131 [2024-11-17 14:15:36.310501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.310656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.310676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:58.131 [2024-11-17 14:15:36.310697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:58.131 [2024-11-17 14:15:36.310712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.315910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.315938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:58.131 [2024-11-17 14:15:36.315946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.158 ms 00:26:58.131 [2024-11-17 14:15:36.315952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.317050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.317144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:58.131 [2024-11-17 14:15:36.317161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:26:58.131 [2024-11-17 14:15:36.317167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.334229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.334266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:58.131 [2024-11-17 14:15:36.334275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.039 ms 00:26:58.131 [2024-11-17 14:15:36.334281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.334394] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:58.131 [2024-11-17 14:15:36.334485] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:58.131 [2024-11-17 14:15:36.334570] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:58.131 [2024-11-17 14:15:36.334658] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:58.131 [2024-11-17 14:15:36.334665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.334672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:58.131 [2024-11-17 
14:15:36.334679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.353 ms 00:26:58.131 [2024-11-17 14:15:36.334686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.334720] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:58.131 [2024-11-17 14:15:36.334730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.334737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:58.131 [2024-11-17 14:15:36.334744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:58.131 [2024-11-17 14:15:36.334753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.338357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.338384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:58.131 [2024-11-17 14:15:36.338392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.588 ms 00:26:58.131 [2024-11-17 14:15:36.338398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.338921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.338942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:58.131 [2024-11-17 14:15:36.338950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:58.131 [2024-11-17 14:15:36.338962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:58.131 [2024-11-17 14:15:36.339015] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:58.131 [2024-11-17 14:15:36.339180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:58.131 [2024-11-17 14:15:36.339191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:58.131 [2024-11-17 14:15:36.339198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:26:58.131 [2024-11-17 14:15:36.339204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.072 [2024-11-17 14:15:37.329137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.072 [2024-11-17 14:15:37.329214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:59.072 [2024-11-17 14:15:37.329228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 989.669 ms 00:26:59.072 [2024-11-17 14:15:37.329257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.072 [2024-11-17 14:15:37.331502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.072 [2024-11-17 14:15:37.331542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:59.072 [2024-11-17 14:15:37.331552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.794 ms 00:26:59.072 [2024-11-17 14:15:37.331559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.072 [2024-11-17 14:15:37.332399] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:26:59.072 [2024-11-17 14:15:37.332432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.072 [2024-11-17 14:15:37.332440] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:59.072 [2024-11-17 14:15:37.332449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:26:59.072 [2024-11-17 14:15:37.332463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.072 [2024-11-17 14:15:37.332491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.072 [2024-11-17 14:15:37.332499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:59.072 [2024-11-17 14:15:37.332506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:59.072 [2024-11-17 14:15:37.332516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.072 [2024-11-17 14:15:37.332550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 993.527 ms, result 0 00:26:59.072 [2024-11-17 14:15:37.332585] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:59.072 [2024-11-17 14:15:37.332761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.072 [2024-11-17 14:15:37.332770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:59.072 [2024-11-17 14:15:37.332776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:26:59.072 [2024-11-17 14:15:37.332782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.977927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.977984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:00.025 [2024-11-17 14:15:37.977999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 644.740 ms 00:27:00.025 [2024-11-17 14:15:37.978007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.988513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.988557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:00.025 [2024-11-17 14:15:37.988570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.064 ms 00:27:00.025 [2024-11-17 14:15:37.988579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.989635] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:00.025 [2024-11-17 14:15:37.989672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.989681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:00.025 [2024-11-17 14:15:37.989690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.053 ms 00:27:00.025 [2024-11-17 14:15:37.989698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.989732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.989741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:00.025 [2024-11-17 14:15:37.989750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:00.025 [2024-11-17 14:15:37.989758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 
14:15:37.989795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 657.200 ms, result 0 00:27:00.025 [2024-11-17 14:15:37.989839] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:00.025 [2024-11-17 14:15:37.989850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:00.025 [2024-11-17 14:15:37.989867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.989877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:00.025 [2024-11-17 14:15:37.989885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1650.863 ms 00:27:00.025 [2024-11-17 14:15:37.989893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.989922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.989939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:00.025 [2024-11-17 14:15:37.989947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:00.025 [2024-11-17 14:15:37.989955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.998619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:00.025 [2024-11-17 14:15:37.998728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.998738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:00.025 [2024-11-17 14:15:37.998748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.757 ms 00:27:00.025 [2024-11-17 14:15:37.998755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:37.999496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:37.999519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:00.025 [2024-11-17 14:15:37.999528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:27:00.025 [2024-11-17 14:15:37.999536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.001760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.001780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:00.025 [2024-11-17 14:15:38.001789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.203 ms 00:27:00.025 [2024-11-17 14:15:38.001801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.001839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.001848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:00.025 [2024-11-17 14:15:38.001857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:00.025 [2024-11-17 14:15:38.001864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.001969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.001979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:00.025 
[2024-11-17 14:15:38.001987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:00.025 [2024-11-17 14:15:38.001994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.002017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.002025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:00.025 [2024-11-17 14:15:38.002037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:00.025 [2024-11-17 14:15:38.002045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.002074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:00.025 [2024-11-17 14:15:38.002087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.002102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:00.025 [2024-11-17 14:15:38.002110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:00.025 [2024-11-17 14:15:38.002117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.002174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.025 [2024-11-17 14:15:38.002186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:00.025 [2024-11-17 14:15:38.002194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:00.025 [2024-11-17 14:15:38.002202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.025 [2024-11-17 14:15:38.003284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1738.599 ms, result 0 00:27:00.025 [2024-11-17 14:15:38.015053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.025 [2024-11-17 14:15:38.031062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:00.025 [2024-11-17 14:15:38.039184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:00.025 Validate MD5 checksum, iteration 1 00:27:00.025 14:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.026 14:15:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:00.026 14:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:00.026 [2024-11-17 14:15:38.210781] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:00.026 [2024-11-17 14:15:38.211032] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92373 ] 00:27:00.287 [2024-11-17 14:15:38.360186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.287 [2024-11-17 14:15:38.396860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.673  [2024-11-17T14:15:40.918Z] Copying: 497/1024 [MB] (497 MBps) [2024-11-17T14:15:40.918Z] Copying: 981/1024 [MB] (484 MBps) [2024-11-17T14:15:42.825Z] Copying: 1024/1024 [MB] (average 491 MBps) 00:27:04.524 00:27:04.524 14:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:04.524 14:15:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=288a4bc3c96d2a211980d90dc53091b9 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 288a4bc3c96d2a211980d90dc53091b9 != \2\8\8\a\4\b\c\3\c\9\6\d\2\a\2\1\1\9\8\0\d\9\0\d\c\5\3\0\9\1\b\9 ]] 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:06.462 Validate MD5 checksum, iteration 2 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:06.462 14:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:06.463 
[2024-11-17 14:15:44.679855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:06.463 [2024-11-17 14:15:44.680205] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92441 ] 00:27:06.720 [2024-11-17 14:15:44.839971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.720 [2024-11-17 14:15:44.873139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.101  [2024-11-17T14:15:47.343Z] Copying: 535/1024 [MB] (535 MBps) [2024-11-17T14:15:49.244Z] Copying: 1024/1024 [MB] (average 540 MBps) 00:27:10.943 00:27:10.943 14:15:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:10.943 14:15:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:12.835 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:13.093 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=442e0d55ed45c48cb462be115e3495ba 00:27:13.093 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 442e0d55ed45c48cb462be115e3495ba != \4\4\2\e\0\d\5\5\e\d\4\5\c\4\8\c\b\4\6\2\b\e\1\1\5\e\3\4\9\5\b\a ]] 00:27:13.093 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:13.093 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:13.093 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92338 ]] 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92338 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92338 ']' 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92338 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92338 00:27:13.094 killing process with pid 92338 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92338' 
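With the post-recovery digests also matching the pre-shutdown ones (288a4bc3... and 442e0d55... again), the test clears its traps and tears down: the scratch file and its .md5 are removed, and this time the target is stopped through killprocess rather than kill -9, which is what produces the orderly FTL shutdown traced below (persist L2P, NV cache and band metadata, set FTL clean state). The shape of killprocess, reconstructed from the autotest_common.sh trace at @950-@968 (the final kill and wait happen past the visible lines, so that tail is an assumption):

    # killprocess, graceful-path sketch only; the closing kill/wait is assumed, not traced here.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                          # refuse to act on a dead pid
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # here: reactor_0
        fi
        # @960 checks whether the process is a sudo wrapper; reactor_0 is not, so that branch is skipped.
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }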
00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92338 00:27:13.094 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92338 00:27:13.094 [2024-11-17 14:15:51.360389] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:13.094 [2024-11-17 14:15:51.364602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.364635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:13.094 [2024-11-17 14:15:51.364646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:13.094 [2024-11-17 14:15:51.364653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.364671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:13.094 [2024-11-17 14:15:51.365184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.365201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:13.094 [2024-11-17 14:15:51.365208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:27:13.094 [2024-11-17 14:15:51.365215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.365420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.365430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:13.094 [2024-11-17 14:15:51.365438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:27:13.094 [2024-11-17 14:15:51.365445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.366864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.366888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:13.094 [2024-11-17 14:15:51.366896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.404 ms 00:27:13.094 [2024-11-17 14:15:51.366902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.367792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.367964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:13.094 [2024-11-17 14:15:51.367977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.864 ms 00:27:13.094 [2024-11-17 14:15:51.367984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.369704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.369731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:13.094 [2024-11-17 14:15:51.369738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.686 ms 00:27:13.094 [2024-11-17 14:15:51.369744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.371073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.371181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:13.094 [2024-11-17 14:15:51.371193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.301 ms 00:27:13.094 [2024-11-17 
14:15:51.371200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.371290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.371299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:13.094 [2024-11-17 14:15:51.371306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:13.094 [2024-11-17 14:15:51.371313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.372678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.372704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:13.094 [2024-11-17 14:15:51.372712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.351 ms 00:27:13.094 [2024-11-17 14:15:51.372718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.373974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.374000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:13.094 [2024-11-17 14:15:51.374006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:27:13.094 [2024-11-17 14:15:51.374013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.375087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.375111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:13.094 [2024-11-17 14:15:51.375118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.050 ms 00:27:13.094 [2024-11-17 14:15:51.375123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.376432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.376457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:13.094 [2024-11-17 14:15:51.376464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:27:13.094 [2024-11-17 14:15:51.376470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.376495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:13.094 [2024-11-17 14:15:51.376507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:13.094 [2024-11-17 14:15:51.376518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:13.094 [2024-11-17 14:15:51.376524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:13.094 [2024-11-17 14:15:51.376531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:13.094 [2024-11-17 14:15:51.376622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:13.094 [2024-11-17 14:15:51.376628] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 3f2b34a3-3f30-4474-8b16-2775e53de0aa 00:27:13.094 [2024-11-17 14:15:51.376635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:13.094 [2024-11-17 14:15:51.376644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:13.094 [2024-11-17 14:15:51.376649] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:13.094 [2024-11-17 14:15:51.376655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:13.094 [2024-11-17 14:15:51.376660] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:13.094 [2024-11-17 14:15:51.376666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:13.094 [2024-11-17 14:15:51.376672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:13.094 [2024-11-17 14:15:51.376678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:13.094 [2024-11-17 14:15:51.376683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:13.094 [2024-11-17 14:15:51.376689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.376696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:13.094 [2024-11-17 14:15:51.376704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:27:13.094 [2024-11-17 14:15:51.376715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.094 [2024-11-17 14:15:51.378369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.094 [2024-11-17 14:15:51.378392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:13.094 [2024-11-17 14:15:51.378399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.642 ms 00:27:13.095 [2024-11-17 14:15:51.378410] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.095 [2024-11-17 14:15:51.378496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:13.095 [2024-11-17 14:15:51.378504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:13.095 [2024-11-17 14:15:51.378512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:27:13.095 [2024-11-17 14:15:51.378518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.095 [2024-11-17 14:15:51.384421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.095 [2024-11-17 14:15:51.384527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:13.095 [2024-11-17 14:15:51.384539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.095 [2024-11-17 14:15:51.384546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.095 [2024-11-17 14:15:51.384574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.095 [2024-11-17 14:15:51.384582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:13.095 [2024-11-17 14:15:51.384590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.095 [2024-11-17 14:15:51.384597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.095 [2024-11-17 14:15:51.384653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.095 [2024-11-17 14:15:51.384662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:13.095 [2024-11-17 14:15:51.384669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.095 [2024-11-17 14:15:51.384675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.095 [2024-11-17 14:15:51.384689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.095 [2024-11-17 14:15:51.384696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:13.095 [2024-11-17 14:15:51.384702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.095 [2024-11-17 14:15:51.384710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.395300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.395431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:13.354 [2024-11-17 14:15:51.395444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.395450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.403740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.403772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:13.354 [2024-11-17 14:15:51.403785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.403793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.403852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.403861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:13.354 [2024-11-17 14:15:51.403869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
00:27:13.354 [2024-11-17 14:15:51.403875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.403903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.403911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:13.354 [2024-11-17 14:15:51.403921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.403929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.403986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.403995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:13.354 [2024-11-17 14:15:51.404001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.404007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.404034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.404042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:13.354 [2024-11-17 14:15:51.404048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.404054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.404092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.404100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:13.354 [2024-11-17 14:15:51.404106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.404112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.404153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:13.354 [2024-11-17 14:15:51.404162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:13.354 [2024-11-17 14:15:51.404169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:13.354 [2024-11-17 14:15:51.404175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:13.354 [2024-11-17 14:15:51.404353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 39.720 ms, result 0 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.354 Remove shared memory files 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f 
rm -f 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92140 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:13.354 ************************************ 00:27:13.354 END TEST ftl_upgrade_shutdown 00:27:13.354 ************************************ 00:27:13.354 00:27:13.354 real 1m17.277s 00:27:13.354 user 1m41.812s 00:27:13.354 sys 0m21.680s 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:13.354 14:15:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.616 14:15:51 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:27:13.616 14:15:51 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:13.616 14:15:51 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:27:13.616 14:15:51 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:13.616 14:15:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:13.616 ************************************ 00:27:13.616 START TEST ftl_restore_fast 00:27:13.616 ************************************ 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:27:13.616 * Looking for test storage... 00:27:13.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:13.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.616 --rc genhtml_branch_coverage=1 00:27:13.616 --rc genhtml_function_coverage=1 00:27:13.616 --rc genhtml_legend=1 00:27:13.616 --rc geninfo_all_blocks=1 00:27:13.616 --rc geninfo_unexecuted_blocks=1 00:27:13.616 00:27:13.616 ' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:13.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.616 --rc genhtml_branch_coverage=1 00:27:13.616 --rc genhtml_function_coverage=1 00:27:13.616 --rc genhtml_legend=1 00:27:13.616 --rc geninfo_all_blocks=1 00:27:13.616 --rc geninfo_unexecuted_blocks=1 00:27:13.616 00:27:13.616 ' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:13.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.616 --rc genhtml_branch_coverage=1 00:27:13.616 --rc genhtml_function_coverage=1 00:27:13.616 --rc genhtml_legend=1 00:27:13.616 --rc geninfo_all_blocks=1 00:27:13.616 --rc geninfo_unexecuted_blocks=1 00:27:13.616 00:27:13.616 ' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:13.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.616 --rc genhtml_branch_coverage=1 00:27:13.616 --rc genhtml_function_coverage=1 00:27:13.616 --rc genhtml_legend=1 00:27:13.616 --rc geninfo_all_blocks=1 00:27:13.616 --rc geninfo_unexecuted_blocks=1 00:27:13.616 00:27:13.616 ' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
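The cmp_versions xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: each version string is split on the IFS set ".-:" into an array (read -ra ver1 / read -ra ver2) and the components are compared pairwise. A minimal standalone sketch of that pattern, not the actual scripts/common.sh helper, and assuming purely numeric components:

    # Sketch of component-wise version comparison; non-numeric parts (e.g. "pre") would need extra handling.
    cmp_versions_sketch() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && { echo lt; return; }   # missing components count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && { echo gt; return; }
        done
        echo eq
    }
    cmp_versions_sketch 1.15 2    # prints "lt": 1 < 2 on the first component, matching "return 0" in the trace
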
00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:27:13.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
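Everything ftl/common.sh has done to this point is bookkeeping: binaries, cpumasks, RPC socket and config paths. The spdk_tgt daemon itself is launched next, and the test blocks until /var/tmp/spdk.sock answers RPC. A rough, hypothetical reduction of that launch-and-wait step (the real waitforlisten in autotest_common.sh is more careful about PID liveness and timeouts):

    # Hypothetical reduction of the launch-and-wait pattern; not the autotest helper itself.
    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock

    "$spdk_tgt_bin" -m '[0]' &       # core mask matching ftl_tgt_core_mask='[0]' above
    svcpid=$!

    # Poll the UNIX-domain socket until a trivial RPC succeeds, or bail out if the target died.
    for (( i = 0; i < 100; i++ )); do
        "$rpc_py" -s "$rpc_sock" rpc_get_methods &>/dev/null && break
        kill -0 "$svcpid" 2>/dev/null || exit 1
        sleep 0.5
    done
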
00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.XWnzfv412U 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:13.616 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=92595 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 92595 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 92595 ']' 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:27:13.617 14:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.875 [2024-11-17 14:15:51.937585] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
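The getopts ':u:c:f' loop traced above is restore.sh turning its invocation ("restore.sh -f -c 0000:00:10.0 0000:00:11.0") into fast_shutdown=1 and nv_cache=0000:00:10.0, then shifting 3 so the remaining positional argument becomes device=0000:00:11.0. A compact sketch of the same parsing; the -u branch is inferred from the option string rather than exercised in this trace:

    # Sketch of restore.sh-style option handling; variable names follow the trace.
    fast_shutdown=0
    nv_cache=
    uuid=
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;      # -f      -> append --fast-shutdown to bdev_ftl_create
            c) nv_cache=$OPTARG ;;     # -c BDF  -> PCIe address of the NV-cache controller
            u) uuid=$OPTARG ;;         # -u UUID -> assumed: reattach an existing FTL instance
        esac
    done
    shift $(( OPTIND - 1 ))            # the traced run shifts 3: "-f", "-c", and its argument
    device=$1                          # remaining positional: base-device BDF, 0000:00:11.0
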
00:27:13.875 [2024-11-17 14:15:51.937692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92595 ] 00:27:13.875 [2024-11-17 14:15:52.082315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.875 [2024-11-17 14:15:52.122082] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:27:14.438 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:15.002 14:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:15.002 { 00:27:15.002 "name": "nvme0n1", 00:27:15.002 "aliases": [ 00:27:15.002 "9c4ecc3e-1b3c-427c-8eef-303d4225b373" 00:27:15.002 ], 00:27:15.002 "product_name": "NVMe disk", 00:27:15.002 "block_size": 4096, 00:27:15.002 "num_blocks": 1310720, 00:27:15.002 "uuid": "9c4ecc3e-1b3c-427c-8eef-303d4225b373", 00:27:15.002 "numa_id": -1, 00:27:15.002 "assigned_rate_limits": { 00:27:15.002 "rw_ios_per_sec": 0, 00:27:15.002 "rw_mbytes_per_sec": 0, 00:27:15.002 "r_mbytes_per_sec": 0, 00:27:15.002 "w_mbytes_per_sec": 0 00:27:15.002 }, 00:27:15.002 "claimed": true, 00:27:15.002 "claim_type": "read_many_write_one", 00:27:15.002 "zoned": false, 00:27:15.002 "supported_io_types": { 00:27:15.002 "read": true, 00:27:15.002 "write": true, 00:27:15.002 "unmap": true, 00:27:15.002 "flush": true, 00:27:15.002 "reset": true, 00:27:15.002 "nvme_admin": true, 00:27:15.002 "nvme_io": true, 00:27:15.002 "nvme_io_md": false, 00:27:15.002 "write_zeroes": true, 00:27:15.002 "zcopy": false, 00:27:15.002 "get_zone_info": false, 00:27:15.002 "zone_management": false, 00:27:15.002 "zone_append": false, 00:27:15.002 "compare": true, 00:27:15.002 "compare_and_write": false, 00:27:15.002 "abort": true, 00:27:15.002 "seek_hole": false, 00:27:15.002 "seek_data": false, 00:27:15.002 "copy": true, 00:27:15.002 "nvme_iov_md": 
false 00:27:15.002 }, 00:27:15.002 "driver_specific": { 00:27:15.002 "nvme": [ 00:27:15.002 { 00:27:15.002 "pci_address": "0000:00:11.0", 00:27:15.002 "trid": { 00:27:15.002 "trtype": "PCIe", 00:27:15.002 "traddr": "0000:00:11.0" 00:27:15.002 }, 00:27:15.002 "ctrlr_data": { 00:27:15.002 "cntlid": 0, 00:27:15.002 "vendor_id": "0x1b36", 00:27:15.002 "model_number": "QEMU NVMe Ctrl", 00:27:15.002 "serial_number": "12341", 00:27:15.002 "firmware_revision": "8.0.0", 00:27:15.002 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:15.002 "oacs": { 00:27:15.002 "security": 0, 00:27:15.002 "format": 1, 00:27:15.002 "firmware": 0, 00:27:15.002 "ns_manage": 1 00:27:15.002 }, 00:27:15.002 "multi_ctrlr": false, 00:27:15.002 "ana_reporting": false 00:27:15.002 }, 00:27:15.002 "vs": { 00:27:15.002 "nvme_version": "1.4" 00:27:15.002 }, 00:27:15.002 "ns_data": { 00:27:15.002 "id": 1, 00:27:15.002 "can_share": false 00:27:15.002 } 00:27:15.002 } 00:27:15.002 ], 00:27:15.002 "mp_policy": "active_passive" 00:27:15.002 } 00:27:15.002 } 00:27:15.002 ]' 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:15.002 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:15.261 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=8f2fe191-2a8d-4295-94f9-5bc7d4bc5970 00:27:15.261 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:27:15.261 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f2fe191-2a8d-4295-94f9-5bc7d4bc5970 00:27:15.519 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:15.777 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=8425273b-6fa8-41d2-aec8-fd0b3f254935 00:27:15.777 14:15:53 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8425273b-6fa8-41d2-aec8-fd0b3f254935 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local 
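The get_bdev_size helper traced above reads the bdev_get_bdevs JSON twice with jq, once for block_size (4096) and once for num_blocks (1310720), and reports their product in MiB: 4096 * 1310720 / 1048576 = 5120, the figure echoed into base_size. The same arithmetic as a self-contained sketch, with the rpc.py path pulled into a variable for readability:

    # Sketch: bdev size in MiB from bdev_get_bdevs output.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size_mib() {
        local bdev=$1 bs nb
        bs=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')
        nb=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')
        echo $(( bs * nb / 1024 / 1024 ))
    }
    get_bdev_size_mib nvme0n1    # 4096 * 1310720 / 1048576 = 5120, as logged
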
base_bdev=bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:16.035 { 00:27:16.035 "name": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:16.035 "aliases": [ 00:27:16.035 "lvs/nvme0n1p0" 00:27:16.035 ], 00:27:16.035 "product_name": "Logical Volume", 00:27:16.035 "block_size": 4096, 00:27:16.035 "num_blocks": 26476544, 00:27:16.035 "uuid": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:16.035 "assigned_rate_limits": { 00:27:16.035 "rw_ios_per_sec": 0, 00:27:16.035 "rw_mbytes_per_sec": 0, 00:27:16.035 "r_mbytes_per_sec": 0, 00:27:16.035 "w_mbytes_per_sec": 0 00:27:16.035 }, 00:27:16.035 "claimed": false, 00:27:16.035 "zoned": false, 00:27:16.035 "supported_io_types": { 00:27:16.035 "read": true, 00:27:16.035 "write": true, 00:27:16.035 "unmap": true, 00:27:16.035 "flush": false, 00:27:16.035 "reset": true, 00:27:16.035 "nvme_admin": false, 00:27:16.035 "nvme_io": false, 00:27:16.035 "nvme_io_md": false, 00:27:16.035 "write_zeroes": true, 00:27:16.035 "zcopy": false, 00:27:16.035 "get_zone_info": false, 00:27:16.035 "zone_management": false, 00:27:16.035 "zone_append": false, 00:27:16.035 "compare": false, 00:27:16.035 "compare_and_write": false, 00:27:16.035 "abort": false, 00:27:16.035 "seek_hole": true, 00:27:16.035 "seek_data": true, 00:27:16.035 "copy": false, 00:27:16.035 "nvme_iov_md": false 00:27:16.035 }, 00:27:16.035 "driver_specific": { 00:27:16.035 "lvol": { 00:27:16.035 "lvol_store_uuid": "8425273b-6fa8-41d2-aec8-fd0b3f254935", 00:27:16.035 "base_bdev": "nvme0n1", 00:27:16.035 "thin_provision": true, 00:27:16.035 "num_allocated_clusters": 0, 00:27:16.035 "snapshot": false, 00:27:16.035 "clone": false, 00:27:16.035 "esnap_clone": false 00:27:16.035 } 00:27:16.035 } 00:27:16.035 } 00:27:16.035 ]' 00:27:16.035 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:27:16.293 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
00:27:16.552 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:16.552 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:16.810 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:16.810 { 00:27:16.810 "name": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:16.810 "aliases": [ 00:27:16.810 "lvs/nvme0n1p0" 00:27:16.810 ], 00:27:16.810 "product_name": "Logical Volume", 00:27:16.810 "block_size": 4096, 00:27:16.810 "num_blocks": 26476544, 00:27:16.810 "uuid": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:16.810 "assigned_rate_limits": { 00:27:16.810 "rw_ios_per_sec": 0, 00:27:16.810 "rw_mbytes_per_sec": 0, 00:27:16.810 "r_mbytes_per_sec": 0, 00:27:16.811 "w_mbytes_per_sec": 0 00:27:16.811 }, 00:27:16.811 "claimed": false, 00:27:16.811 "zoned": false, 00:27:16.811 "supported_io_types": { 00:27:16.811 "read": true, 00:27:16.811 "write": true, 00:27:16.811 "unmap": true, 00:27:16.811 "flush": false, 00:27:16.811 "reset": true, 00:27:16.811 "nvme_admin": false, 00:27:16.811 "nvme_io": false, 00:27:16.811 "nvme_io_md": false, 00:27:16.811 "write_zeroes": true, 00:27:16.811 "zcopy": false, 00:27:16.811 "get_zone_info": false, 00:27:16.811 "zone_management": false, 00:27:16.811 "zone_append": false, 00:27:16.811 "compare": false, 00:27:16.811 "compare_and_write": false, 00:27:16.811 "abort": false, 00:27:16.811 "seek_hole": true, 00:27:16.811 "seek_data": true, 00:27:16.811 "copy": false, 00:27:16.811 "nvme_iov_md": false 00:27:16.811 }, 00:27:16.811 "driver_specific": { 00:27:16.811 "lvol": { 00:27:16.811 "lvol_store_uuid": "8425273b-6fa8-41d2-aec8-fd0b3f254935", 00:27:16.811 "base_bdev": "nvme0n1", 00:27:16.811 "thin_provision": true, 00:27:16.811 "num_allocated_clusters": 0, 00:27:16.811 "snapshot": false, 00:27:16.811 "clone": false, 00:27:16.811 "esnap_clone": false 00:27:16.811 } 00:27:16.811 } 00:27:16.811 } 00:27:16.811 ]' 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:27:16.811 14:15:54 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 
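With the cache controller attached as nvc0, common.sh settles on cache_size=5171 MiB (which works out to five percent of the 103424 MiB base volume, rounded down) and carves exactly one partition of that size out of nvc0n1 with bdev_split_create; the resulting nvc0n1p0, not the raw namespace, is what restore.sh passes to bdev_ftl_create as -c shortly afterwards. The split, replayed by hand against the same target, with a quick geometry check:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # One split of 5171 MiB -> creates nvc0n1p0 on top of nvc0n1, as in the trace.
    "$rpc" bdev_split_create nvc0n1 -s 5171 1
    # Sanity check: the new partition and its geometry.
    "$rpc" bdev_get_bdevs -b nvc0n1p0 | jq '.[] | {name, block_size, num_blocks}'
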
00:27:17.069 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf5fc51e-8e97-43cf-b46c-0ee612400fc2 00:27:17.069 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:17.069 { 00:27:17.069 "name": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:17.069 "aliases": [ 00:27:17.069 "lvs/nvme0n1p0" 00:27:17.069 ], 00:27:17.069 "product_name": "Logical Volume", 00:27:17.069 "block_size": 4096, 00:27:17.069 "num_blocks": 26476544, 00:27:17.069 "uuid": "bf5fc51e-8e97-43cf-b46c-0ee612400fc2", 00:27:17.069 "assigned_rate_limits": { 00:27:17.069 "rw_ios_per_sec": 0, 00:27:17.069 "rw_mbytes_per_sec": 0, 00:27:17.069 "r_mbytes_per_sec": 0, 00:27:17.069 "w_mbytes_per_sec": 0 00:27:17.069 }, 00:27:17.069 "claimed": false, 00:27:17.069 "zoned": false, 00:27:17.069 "supported_io_types": { 00:27:17.069 "read": true, 00:27:17.069 "write": true, 00:27:17.069 "unmap": true, 00:27:17.069 "flush": false, 00:27:17.069 "reset": true, 00:27:17.069 "nvme_admin": false, 00:27:17.069 "nvme_io": false, 00:27:17.069 "nvme_io_md": false, 00:27:17.069 "write_zeroes": true, 00:27:17.069 "zcopy": false, 00:27:17.069 "get_zone_info": false, 00:27:17.069 "zone_management": false, 00:27:17.069 "zone_append": false, 00:27:17.069 "compare": false, 00:27:17.069 "compare_and_write": false, 00:27:17.069 "abort": false, 00:27:17.069 "seek_hole": true, 00:27:17.069 "seek_data": true, 00:27:17.070 "copy": false, 00:27:17.070 "nvme_iov_md": false 00:27:17.070 }, 00:27:17.070 "driver_specific": { 00:27:17.070 "lvol": { 00:27:17.070 "lvol_store_uuid": "8425273b-6fa8-41d2-aec8-fd0b3f254935", 00:27:17.070 "base_bdev": "nvme0n1", 00:27:17.070 "thin_provision": true, 00:27:17.070 "num_allocated_clusters": 0, 00:27:17.070 "snapshot": false, 00:27:17.070 "clone": false, 00:27:17.070 "esnap_clone": false 00:27:17.070 } 00:27:17.070 } 00:27:17.070 } 00:27:17.070 ]' 00:27:17.070 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:17.070 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:27:17.070 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bf5fc51e-8e97-43cf-b46c-0ee612400fc2 --l2p_dram_limit 10' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:27:17.329 14:15:55 ftl.ftl_restore_fast 
-- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:27:17.329 14:15:55 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bf5fc51e-8e97-43cf-b46c-0ee612400fc2 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:27:17.329 [2024-11-17 14:15:55.552423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.552466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:17.329 [2024-11-17 14:15:55.552478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:17.329 [2024-11-17 14:15:55.552486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.552522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.552531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:17.329 [2024-11-17 14:15:55.552538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:17.329 [2024-11-17 14:15:55.552550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.552570] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:17.329 [2024-11-17 14:15:55.552753] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:17.329 [2024-11-17 14:15:55.552764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.552772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:17.329 [2024-11-17 14:15:55.552781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:27:17.329 [2024-11-17 14:15:55.552791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.552953] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:27:17.329 [2024-11-17 14:15:55.554170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.554193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:17.329 [2024-11-17 14:15:55.554203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:17.329 [2024-11-17 14:15:55.554209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.560884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.560910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:17.329 [2024-11-17 14:15:55.560919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.622 ms 00:27:17.329 [2024-11-17 14:15:55.560925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.560990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.560998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:17.329 [2024-11-17 14:15:55.561007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 
00:27:17.329 [2024-11-17 14:15:55.561014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.329 [2024-11-17 14:15:55.561052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.329 [2024-11-17 14:15:55.561059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:17.329 [2024-11-17 14:15:55.561067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:17.330 [2024-11-17 14:15:55.561073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.330 [2024-11-17 14:15:55.561090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:17.330 [2024-11-17 14:15:55.562666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.330 [2024-11-17 14:15:55.562694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:17.330 [2024-11-17 14:15:55.562704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:27:17.330 [2024-11-17 14:15:55.562712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.330 [2024-11-17 14:15:55.562742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.330 [2024-11-17 14:15:55.562752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:17.330 [2024-11-17 14:15:55.562758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:17.330 [2024-11-17 14:15:55.562771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.330 [2024-11-17 14:15:55.562795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:17.330 [2024-11-17 14:15:55.562909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:17.330 [2024-11-17 14:15:55.562918] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:17.330 [2024-11-17 14:15:55.562928] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:17.330 [2024-11-17 14:15:55.562937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:17.330 [2024-11-17 14:15:55.562947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:17.330 [2024-11-17 14:15:55.562953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:17.330 [2024-11-17 14:15:55.562962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:17.330 [2024-11-17 14:15:55.562968] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:17.330 [2024-11-17 14:15:55.562977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:17.330 [2024-11-17 14:15:55.562985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.330 [2024-11-17 14:15:55.562992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:17.330 [2024-11-17 14:15:55.563001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:27:17.330 [2024-11-17 14:15:55.563009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.330 [2024-11-17 14:15:55.563072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.330 [2024-11-17 
14:15:55.563082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:17.330 [2024-11-17 14:15:55.563088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:17.330 [2024-11-17 14:15:55.563095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.330 [2024-11-17 14:15:55.563166] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:17.330 [2024-11-17 14:15:55.563177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:17.330 [2024-11-17 14:15:55.563183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:17.330 [2024-11-17 14:15:55.563205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:17.330 [2024-11-17 14:15:55.563222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:17.330 [2024-11-17 14:15:55.563234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:17.330 [2024-11-17 14:15:55.563265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:17.330 [2024-11-17 14:15:55.563271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:17.330 [2024-11-17 14:15:55.563281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:17.330 [2024-11-17 14:15:55.563286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:17.330 [2024-11-17 14:15:55.563293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:17.330 [2024-11-17 14:15:55.563305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:17.330 [2024-11-17 14:15:55.563323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:17.330 [2024-11-17 14:15:55.563345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:17.330 [2024-11-17 14:15:55.563364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 
00:27:17.330 [2024-11-17 14:15:55.563386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:17.330 [2024-11-17 14:15:55.563406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:17.330 [2024-11-17 14:15:55.563420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:17.330 [2024-11-17 14:15:55.563427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:17.330 [2024-11-17 14:15:55.563433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:17.330 [2024-11-17 14:15:55.563440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:17.330 [2024-11-17 14:15:55.563446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:17.330 [2024-11-17 14:15:55.563454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:17.330 [2024-11-17 14:15:55.563467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:17.330 [2024-11-17 14:15:55.563473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563484] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:17.330 [2024-11-17 14:15:55.563491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:17.330 [2024-11-17 14:15:55.563501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:17.330 [2024-11-17 14:15:55.563515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:17.330 [2024-11-17 14:15:55.563521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:17.330 [2024-11-17 14:15:55.563529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:17.330 [2024-11-17 14:15:55.563534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:17.330 [2024-11-17 14:15:55.563542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:17.330 [2024-11-17 14:15:55.563547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:17.330 [2024-11-17 14:15:55.563558] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:17.330 [2024-11-17 14:15:55.563567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.330 [2024-11-17 14:15:55.563575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:17.330 [2024-11-17 14:15:55.563582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:17.330 [2024-11-17 14:15:55.563591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 
00:27:17.330 [2024-11-17 14:15:55.563597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:17.330 [2024-11-17 14:15:55.563605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:17.330 [2024-11-17 14:15:55.563612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:17.330 [2024-11-17 14:15:55.563621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:17.330 [2024-11-17 14:15:55.563627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:17.330 [2024-11-17 14:15:55.563635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:17.330 [2024-11-17 14:15:55.563641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:17.330 [2024-11-17 14:15:55.563649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:17.330 [2024-11-17 14:15:55.563655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:17.330 [2024-11-17 14:15:55.563663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:17.330 [2024-11-17 14:15:55.563670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:17.330 [2024-11-17 14:15:55.563677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:17.331 [2024-11-17 14:15:55.563686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.331 [2024-11-17 14:15:55.563694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:17.331 [2024-11-17 14:15:55.563700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:17.331 [2024-11-17 14:15:55.563708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:17.331 [2024-11-17 14:15:55.563713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:17.331 [2024-11-17 14:15:55.563723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:17.331 [2024-11-17 14:15:55.563729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:17.331 [2024-11-17 14:15:55.563738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:27:17.331 [2024-11-17 14:15:55.563743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:17.331 [2024-11-17 14:15:55.563777] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs 
scrubbing, this may take a while. 00:27:17.331 [2024-11-17 14:15:55.563784] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:21.530 [2024-11-17 14:15:59.630129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.630256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:21.530 [2024-11-17 14:15:59.630288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4066.325 ms 00:27:21.530 [2024-11-17 14:15:59.630307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.650498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.650566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:21.530 [2024-11-17 14:15:59.650587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.039 ms 00:27:21.530 [2024-11-17 14:15:59.650597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.650760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.650773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:21.530 [2024-11-17 14:15:59.650791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:21.530 [2024-11-17 14:15:59.650800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.667122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.667187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:21.530 [2024-11-17 14:15:59.667204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.279 ms 00:27:21.530 [2024-11-17 14:15:59.667214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.667286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.667304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:21.530 [2024-11-17 14:15:59.667321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:21.530 [2024-11-17 14:15:59.667333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.668135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.668186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:21.530 [2024-11-17 14:15:59.668202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:27:21.530 [2024-11-17 14:15:59.668212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.668365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.668377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:21.530 [2024-11-17 14:15:59.668400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:21.530 [2024-11-17 14:15:59.668409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.695723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.695785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 
00:27:21.530 [2024-11-17 14:15:59.695803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.284 ms 00:27:21.530 [2024-11-17 14:15:59.695812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.707385] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:21.530 [2024-11-17 14:15:59.712493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.712548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:21.530 [2024-11-17 14:15:59.712561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.568 ms 00:27:21.530 [2024-11-17 14:15:59.712573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.799975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.800048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:21.530 [2024-11-17 14:15:59.800062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.365 ms 00:27:21.530 [2024-11-17 14:15:59.800087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.800344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.800364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:21.530 [2024-11-17 14:15:59.800374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:27:21.530 [2024-11-17 14:15:59.800385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.807147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.807213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:21.530 [2024-11-17 14:15:59.807226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.732 ms 00:27:21.530 [2024-11-17 14:15:59.807256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.813059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.813118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:21.530 [2024-11-17 14:15:59.813131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.732 ms 00:27:21.530 [2024-11-17 14:15:59.813142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.530 [2024-11-17 14:15:59.813556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.530 [2024-11-17 14:15:59.813575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:21.530 [2024-11-17 14:15:59.813587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:27:21.530 [2024-11-17 14:15:59.813600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 [2024-11-17 14:15:59.863053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.863119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:21.791 [2024-11-17 14:15:59.863132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.428 ms 00:27:21.791 [2024-11-17 14:15:59.863145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 
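Each management step in this sequence is bracketed by trace_step notices giving the action name, its duration in milliseconds, and a status code; the NV cache scrub alone accounts for roughly 4.1 s here. When triaging a slow FTL startup, the per-action durations can be pulled straight out of a saved console log. A minimal sketch, assuming the output has been captured to a file named ftl_startup.log (a hypothetical name, not part of the test):

  # Print every trace_step duration, largest first. The field right
  # after "duration:" is the value in milliseconds; a line may carry
  # several entries, so scan all fields rather than just the first hit.
  awk '/trace_step/ && /duration:/ {
      for (i = 1; i <= NF; i++)
          if ($i == "duration:") print $(i + 1), $(i + 2)
  }' ftl_startup.log | sort -rn | head

On this run that would surface the "Scrub NV cache" step (4066.325 ms) well ahead of every other action.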
[2024-11-17 14:15:59.872159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.872221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:21.791 [2024-11-17 14:15:59.872250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.926 ms 00:27:21.791 [2024-11-17 14:15:59.872264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 [2024-11-17 14:15:59.879111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.879171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:21.791 [2024-11-17 14:15:59.879183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.790 ms 00:27:21.791 [2024-11-17 14:15:59.879194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 [2024-11-17 14:15:59.886527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.886588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:21.791 [2024-11-17 14:15:59.886600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.246 ms 00:27:21.791 [2024-11-17 14:15:59.886616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 [2024-11-17 14:15:59.886676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.886692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:21.791 [2024-11-17 14:15:59.886703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:21.791 [2024-11-17 14:15:59.886714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.791 [2024-11-17 14:15:59.886796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:21.791 [2024-11-17 14:15:59.886811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:21.791 [2024-11-17 14:15:59.886821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:27:21.791 [2024-11-17 14:15:59.886842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:21.792 [2024-11-17 14:15:59.888390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4335.343 ms, result 0 00:27:21.792 { 00:27:21.792 "name": "ftl0", 00:27:21.792 "uuid": "9eb8e1fa-342d-4de8-93e6-c05f3844e79c" 00:27:21.792 } 00:27:21.792 14:15:59 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:27:21.792 14:15:59 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:22.053 14:16:00 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:27:22.053 14:16:00 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:22.053 [2024-11-17 14:16:00.337951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.053 [2024-11-17 14:16:00.338012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:22.053 [2024-11-17 14:16:00.338029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:22.053 [2024-11-17 14:16:00.338039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.053 [2024-11-17 14:16:00.338071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:27:22.053 [2024-11-17 14:16:00.339094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.053 [2024-11-17 14:16:00.339145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:22.053 [2024-11-17 14:16:00.339159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:27:22.053 [2024-11-17 14:16:00.339173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.053 [2024-11-17 14:16:00.339611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.053 [2024-11-17 14:16:00.339647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:22.053 [2024-11-17 14:16:00.339658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:27:22.053 [2024-11-17 14:16:00.339669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.053 [2024-11-17 14:16:00.342921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.053 [2024-11-17 14:16:00.342956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:22.053 [2024-11-17 14:16:00.342966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:27:22.053 [2024-11-17 14:16:00.342977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.053 [2024-11-17 14:16:00.349496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.053 [2024-11-17 14:16:00.349550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:22.053 [2024-11-17 14:16:00.349562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.495 ms 00:27:22.053 [2024-11-17 14:16:00.349575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.353272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.353339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:22.314 [2024-11-17 14:16:00.353349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:27:22.314 [2024-11-17 14:16:00.353360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.360823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.360887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:22.314 [2024-11-17 14:16:00.360901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.409 ms 00:27:22.314 [2024-11-17 14:16:00.360915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.361059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.361075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:22.314 [2024-11-17 14:16:00.361087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:22.314 [2024-11-17 14:16:00.361099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.364668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.364731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:22.314 [2024-11-17 14:16:00.364743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.543 ms 00:27:22.314 
[2024-11-17 14:16:00.364754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.367827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.367900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:22.314 [2024-11-17 14:16:00.367911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:27:22.314 [2024-11-17 14:16:00.367921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.370478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.370535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:22.314 [2024-11-17 14:16:00.370547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:27:22.314 [2024-11-17 14:16:00.370559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.373119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.314 [2024-11-17 14:16:00.373178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:22.314 [2024-11-17 14:16:00.373189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.477 ms 00:27:22.314 [2024-11-17 14:16:00.373199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.314 [2024-11-17 14:16:00.373263] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:22.314 [2024-11-17 14:16:00.373284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:27:22.314 [2024-11-17 14:16:00.373429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:27:22.314 [2024-11-17 14:16:00.373685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.373995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374164] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:22.315 [2024-11-17 14:16:00.374315] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:22.315 [2024-11-17 14:16:00.374323] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:27:22.315 [2024-11-17 14:16:00.374337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:22.315 [2024-11-17 14:16:00.374344] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:22.315 [2024-11-17 14:16:00.374355] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:22.315 [2024-11-17 14:16:00.374364] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:22.315 [2024-11-17 14:16:00.374376] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:22.315 [2024-11-17 14:16:00.374384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:22.315 [2024-11-17 14:16:00.374394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:22.315 [2024-11-17 14:16:00.374402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:22.315 [2024-11-17 14:16:00.374412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:22.315 [2024-11-17 14:16:00.374419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.315 [2024-11-17 14:16:00.374432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:22.315 [2024-11-17 14:16:00.374443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.157 ms 00:27:22.315 [2024-11-17 14:16:00.374453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.377592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.315 [2024-11-17 14:16:00.377645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:27:22.315 [2024-11-17 14:16:00.377657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:27:22.315 [2024-11-17 14:16:00.377669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.377832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.315 [2024-11-17 14:16:00.377846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:22.315 [2024-11-17 14:16:00.377856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:27:22.315 [2024-11-17 14:16:00.377868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.389179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.315 [2024-11-17 14:16:00.389236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.315 [2024-11-17 14:16:00.389285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.315 [2024-11-17 14:16:00.389303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.389382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.315 [2024-11-17 14:16:00.389396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.315 [2024-11-17 14:16:00.389407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.315 [2024-11-17 14:16:00.389418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.389504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.315 [2024-11-17 14:16:00.389524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.315 [2024-11-17 14:16:00.389534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.315 [2024-11-17 14:16:00.389545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.315 [2024-11-17 14:16:00.389566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.389582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.316 [2024-11-17 14:16:00.389592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.389604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.407771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.407824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.316 [2024-11-17 14:16:00.407836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.407846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.421789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.421842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.316 [2024-11-17 14:16:00.421853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.421867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.421955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 
14:16:00.421971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.316 [2024-11-17 14:16:00.421978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.421989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.422049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.316 [2024-11-17 14:16:00.422058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.422067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.422154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.316 [2024-11-17 14:16:00.422162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.422170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.422213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:22.316 [2024-11-17 14:16:00.422219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.422230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.422304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.316 [2024-11-17 14:16:00.422312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.422321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:22.316 [2024-11-17 14:16:00.422386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.316 [2024-11-17 14:16:00.422395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:22.316 [2024-11-17 14:16:00.422404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.316 [2024-11-17 14:16:00.422551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 84.568 ms, result 0 00:27:22.316 true 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 92595 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92595 ']' 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92595 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92595 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:22.316 killing process with pid 92595 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92595' 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 92595 00:27:22.316 14:16:00 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 92595 00:27:27.596 14:16:05 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:27:30.878 262144+0 records in 00:27:30.878 262144+0 records out 00:27:30.878 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.58323 s, 300 MB/s 00:27:30.878 14:16:09 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:33.417 14:16:11 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:33.417 [2024-11-17 14:16:11.311937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:27:33.417 [2024-11-17 14:16:11.312045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92811 ] 00:27:33.417 [2024-11-17 14:16:11.459222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.417 [2024-11-17 14:16:11.508213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.417 [2024-11-17 14:16:11.652932] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:33.417 [2024-11-17 14:16:11.653026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:33.680 [2024-11-17 14:16:11.816758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.816815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:33.680 [2024-11-17 14:16:11.816836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:33.680 [2024-11-17 14:16:11.816850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.816911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.816923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.680 [2024-11-17 14:16:11.816938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:33.680 [2024-11-17 14:16:11.816947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.816968] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:33.680 [2024-11-17 14:16:11.817590] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:33.680 [2024-11-17 14:16:11.817644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.817655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.680 [2024-11-17 14:16:11.817670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:27:33.680 [2024-11-17 14:16:11.817689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:33.680 [2024-11-17 14:16:11.820053] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:33.680 [2024-11-17 14:16:11.824832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.824879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:33.680 [2024-11-17 14:16:11.824892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.781 ms 00:27:33.680 [2024-11-17 14:16:11.824902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.824995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.825007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:33.680 [2024-11-17 14:16:11.825020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:33.680 [2024-11-17 14:16:11.825028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.836804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.836844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.680 [2024-11-17 14:16:11.836857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.719 ms 00:27:33.680 [2024-11-17 14:16:11.836880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.836991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.837003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.680 [2024-11-17 14:16:11.837012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:33.680 [2024-11-17 14:16:11.837024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.680 [2024-11-17 14:16:11.837100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.680 [2024-11-17 14:16:11.837112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:33.680 [2024-11-17 14:16:11.837122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:33.681 [2024-11-17 14:16:11.837130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.681 [2024-11-17 14:16:11.837159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:33.681 [2024-11-17 14:16:11.839938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.681 [2024-11-17 14:16:11.839975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.681 [2024-11-17 14:16:11.839987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.789 ms 00:27:33.681 [2024-11-17 14:16:11.840006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.681 [2024-11-17 14:16:11.840046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.681 [2024-11-17 14:16:11.840057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:33.681 [2024-11-17 14:16:11.840067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:33.681 [2024-11-17 14:16:11.840077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.681 [2024-11-17 14:16:11.840103] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout 
setup mode 0 00:27:33.681 [2024-11-17 14:16:11.840136] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:33.681 [2024-11-17 14:16:11.840179] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:33.681 [2024-11-17 14:16:11.840204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:33.681 [2024-11-17 14:16:11.840336] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:33.681 [2024-11-17 14:16:11.840351] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:33.681 [2024-11-17 14:16:11.840364] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:33.681 [2024-11-17 14:16:11.840376] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840390] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840399] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:33.681 [2024-11-17 14:16:11.840408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:33.681 [2024-11-17 14:16:11.840416] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:33.681 [2024-11-17 14:16:11.840426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:33.681 [2024-11-17 14:16:11.840436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.681 [2024-11-17 14:16:11.840444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:33.681 [2024-11-17 14:16:11.840453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:27:33.681 [2024-11-17 14:16:11.840464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.681 [2024-11-17 14:16:11.840551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.681 [2024-11-17 14:16:11.840565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:33.681 [2024-11-17 14:16:11.840573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:33.681 [2024-11-17 14:16:11.840585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.681 [2024-11-17 14:16:11.840685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:33.681 [2024-11-17 14:16:11.840699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:33.681 [2024-11-17 14:16:11.840708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:33.681 [2024-11-17 14:16:11.840743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:33.681 [2024-11-17 14:16:11.840769] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.681 [2024-11-17 14:16:11.840787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:33.681 [2024-11-17 14:16:11.840797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:33.681 [2024-11-17 14:16:11.840808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.681 [2024-11-17 14:16:11.840817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:33.681 [2024-11-17 14:16:11.840826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:33.681 [2024-11-17 14:16:11.840835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:33.681 [2024-11-17 14:16:11.840852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:33.681 [2024-11-17 14:16:11.840874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:33.681 [2024-11-17 14:16:11.840899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:33.681 [2024-11-17 14:16:11.840920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:33.681 [2024-11-17 14:16:11.840949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.681 [2024-11-17 14:16:11.840963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:33.681 [2024-11-17 14:16:11.840970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:33.681 [2024-11-17 14:16:11.840976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.681 [2024-11-17 14:16:11.840983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:33.681 [2024-11-17 14:16:11.840990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:33.681 [2024-11-17 14:16:11.840997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.681 [2024-11-17 14:16:11.841005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:33.681 [2024-11-17 14:16:11.841012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:33.681 [2024-11-17 14:16:11.841018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 
[2024-11-17 14:16:11.841026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:33.681 [2024-11-17 14:16:11.841033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:33.681 [2024-11-17 14:16:11.841040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.841048] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:33.681 [2024-11-17 14:16:11.841062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:33.681 [2024-11-17 14:16:11.841072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.681 [2024-11-17 14:16:11.841083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.681 [2024-11-17 14:16:11.841092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:33.681 [2024-11-17 14:16:11.841099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:33.681 [2024-11-17 14:16:11.841106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:33.681 [2024-11-17 14:16:11.841113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:33.681 [2024-11-17 14:16:11.841120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:33.681 [2024-11-17 14:16:11.841128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:33.681 [2024-11-17 14:16:11.841138] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:33.681 [2024-11-17 14:16:11.841148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.681 [2024-11-17 14:16:11.841157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:33.681 [2024-11-17 14:16:11.841165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:33.681 [2024-11-17 14:16:11.841173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:33.681 [2024-11-17 14:16:11.841182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:33.681 [2024-11-17 14:16:11.841190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:33.681 [2024-11-17 14:16:11.841200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:33.681 [2024-11-17 14:16:11.841207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:33.681 [2024-11-17 14:16:11.841214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:33.681 [2024-11-17 14:16:11.841222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:33.681 [2024-11-17 14:16:11.841230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:33.681 [2024-11-17 
14:16:11.841253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:33.681 [2024-11-17 14:16:11.841263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:33.682 [2024-11-17 14:16:11.841270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:33.682 [2024-11-17 14:16:11.841278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:33.682 [2024-11-17 14:16:11.841285] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:33.682 [2024-11-17 14:16:11.841294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.682 [2024-11-17 14:16:11.841304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:33.682 [2024-11-17 14:16:11.841314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:33.682 [2024-11-17 14:16:11.841323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:33.682 [2024-11-17 14:16:11.841330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:33.682 [2024-11-17 14:16:11.841339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.841350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:33.682 [2024-11-17 14:16:11.841360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:27:33.682 [2024-11-17 14:16:11.841368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.869572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.869614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.682 [2024-11-17 14:16:11.869641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.130 ms 00:27:33.682 [2024-11-17 14:16:11.869652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.869763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.869776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:33.682 [2024-11-17 14:16:11.869792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:33.682 [2024-11-17 14:16:11.869801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.880222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.880261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.682 [2024-11-17 14:16:11.880271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.354 ms 00:27:33.682 [2024-11-17 14:16:11.880280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 
14:16:11.880309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.880318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.682 [2024-11-17 14:16:11.880326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.682 [2024-11-17 14:16:11.880334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.880773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.880802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.682 [2024-11-17 14:16:11.880812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:27:33.682 [2024-11-17 14:16:11.880820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.880955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.880966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.682 [2024-11-17 14:16:11.880975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:27:33.682 [2024-11-17 14:16:11.880984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.887030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.887057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.682 [2024-11-17 14:16:11.887072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.024 ms 00:27:33.682 [2024-11-17 14:16:11.887085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.890345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:33.682 [2024-11-17 14:16:11.890377] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:33.682 [2024-11-17 14:16:11.890392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.890401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:33.682 [2024-11-17 14:16:11.890409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:27:33.682 [2024-11-17 14:16:11.890416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.905589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.905623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:33.682 [2024-11-17 14:16:11.905634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.134 ms 00:27:33.682 [2024-11-17 14:16:11.905644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.907581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.907610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:33.682 [2024-11-17 14:16:11.907619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.900 ms 00:27:33.682 [2024-11-17 14:16:11.907626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.909161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:33.682 [2024-11-17 14:16:11.909188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:33.682 [2024-11-17 14:16:11.909197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:27:33.682 [2024-11-17 14:16:11.909204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.909555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.909574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:33.682 [2024-11-17 14:16:11.909583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:33.682 [2024-11-17 14:16:11.909590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.929403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.929443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:33.682 [2024-11-17 14:16:11.929457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.797 ms 00:27:33.682 [2024-11-17 14:16:11.929466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.937065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:33.682 [2024-11-17 14:16:11.939766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.939795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:33.682 [2024-11-17 14:16:11.939806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.262 ms 00:27:33.682 [2024-11-17 14:16:11.939823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.939874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.939885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:33.682 [2024-11-17 14:16:11.939895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:33.682 [2024-11-17 14:16:11.939903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.939970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.939980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:33.682 [2024-11-17 14:16:11.939988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:33.682 [2024-11-17 14:16:11.940004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.940027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.940035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:33.682 [2024-11-17 14:16:11.940044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:33.682 [2024-11-17 14:16:11.940051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.940084] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:33.682 [2024-11-17 14:16:11.940093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.940103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:27:33.682 [2024-11-17 14:16:11.940113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:33.682 [2024-11-17 14:16:11.940120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.944197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.944234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:33.682 [2024-11-17 14:16:11.944258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:27:33.682 [2024-11-17 14:16:11.944266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.944337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.682 [2024-11-17 14:16:11.944347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:33.682 [2024-11-17 14:16:11.944355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:33.682 [2024-11-17 14:16:11.944362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.682 [2024-11-17 14:16:11.945420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.187 ms, result 0 00:27:35.066  [2024-11-17T14:16:14.308Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-17T14:16:15.250Z] Copying: 41/1024 [MB] (21 MBps) [2024-11-17T14:16:16.194Z] Copying: 62/1024 [MB] (20 MBps) [2024-11-17T14:16:17.134Z] Copying: 77/1024 [MB] (15 MBps) [2024-11-17T14:16:18.069Z] Copying: 97/1024 [MB] (20 MBps) [2024-11-17T14:16:19.003Z] Copying: 116/1024 [MB] (18 MBps) [2024-11-17T14:16:20.403Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-17T14:16:21.053Z] Copying: 152/1024 [MB] (17 MBps) [2024-11-17T14:16:21.989Z] Copying: 167/1024 [MB] (15 MBps) [2024-11-17T14:16:23.375Z] Copying: 182/1024 [MB] (14 MBps) [2024-11-17T14:16:24.319Z] Copying: 194/1024 [MB] (12 MBps) [2024-11-17T14:16:25.252Z] Copying: 205/1024 [MB] (10 MBps) [2024-11-17T14:16:26.186Z] Copying: 217/1024 [MB] (12 MBps) [2024-11-17T14:16:27.119Z] Copying: 233/1024 [MB] (15 MBps) [2024-11-17T14:16:28.061Z] Copying: 245/1024 [MB] (12 MBps) [2024-11-17T14:16:29.003Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-17T14:16:30.389Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-17T14:16:30.961Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-17T14:16:32.348Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-17T14:16:33.292Z] Copying: 303/1024 [MB] (13 MBps) [2024-11-17T14:16:34.251Z] Copying: 333/1024 [MB] (30 MBps) [2024-11-17T14:16:35.195Z] Copying: 343/1024 [MB] (10 MBps) [2024-11-17T14:16:36.139Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-17T14:16:37.081Z] Copying: 368/1024 [MB] (14 MBps) [2024-11-17T14:16:38.023Z] Copying: 402/1024 [MB] (33 MBps) [2024-11-17T14:16:38.966Z] Copying: 434/1024 [MB] (32 MBps) [2024-11-17T14:16:40.348Z] Copying: 450/1024 [MB] (16 MBps) [2024-11-17T14:16:41.289Z] Copying: 467/1024 [MB] (16 MBps) [2024-11-17T14:16:42.234Z] Copying: 487/1024 [MB] (19 MBps) [2024-11-17T14:16:43.177Z] Copying: 501/1024 [MB] (14 MBps) [2024-11-17T14:16:44.120Z] Copying: 518/1024 [MB] (17 MBps) [2024-11-17T14:16:45.062Z] Copying: 532/1024 [MB] (14 MBps) [2024-11-17T14:16:46.004Z] Copying: 548/1024 [MB] (15 MBps) [2024-11-17T14:16:47.390Z] Copying: 562/1024 [MB] (14 MBps) [2024-11-17T14:16:47.960Z] Copying: 576/1024 [MB] (13 MBps) [2024-11-17T14:16:49.344Z] Copying: 592/1024 [MB] (16 MBps) [2024-11-17T14:16:50.285Z] Copying: 602/1024 [MB] (10 MBps) 
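The Action/name/duration/status quadruples that mngt/ftl_mngt.c traces above follow a fixed format, so per-step timings can be recovered from a saved copy of this console output. A minimal sketch, assuming the output is captured one message per line (as Jenkins emits it) to a file named build.log — a hypothetical name — pairing each name: line with the duration: line that follows it and listing the slowest FTL management steps first:

    # Pair each "name:" trace line with its following "duration:" line,
    # then print the slowest FTL management steps first (sketch only).
    awk '
      /428:trace_step/ { sub(/.*name: /, ""); sub(/ [0-9:.]+ *$/, ""); name = $0 }
      /430:trace_step/ { if (match($0, /duration: [0-9.]+ ms/))
                           printf "%s ms  %s\n", substr($0, RSTART+10, RLENGTH-13), name }
    ' build.log | sort -rn | head

Against the startup sequence above, this would rank Initialize metadata (28.130 ms) and Restore P2L checkpoints (19.797 ms) at the top.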
[2024-11-17T14:16:51.229Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-17T14:16:52.174Z] Copying: 624/1024 [MB] (10 MBps) [2024-11-17T14:16:53.188Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-17T14:16:54.131Z] Copying: 657/1024 [MB] (23 MBps) [2024-11-17T14:16:55.075Z] Copying: 672/1024 [MB] (14 MBps) [2024-11-17T14:16:56.019Z] Copying: 689/1024 [MB] (17 MBps) [2024-11-17T14:16:56.963Z] Copying: 702/1024 [MB] (12 MBps) [2024-11-17T14:16:58.377Z] Copying: 715/1024 [MB] (12 MBps) [2024-11-17T14:16:59.319Z] Copying: 728/1024 [MB] (12 MBps) [2024-11-17T14:17:00.261Z] Copying: 740/1024 [MB] (12 MBps) [2024-11-17T14:17:01.204Z] Copying: 754/1024 [MB] (14 MBps) [2024-11-17T14:17:02.147Z] Copying: 770/1024 [MB] (16 MBps) [2024-11-17T14:17:03.088Z] Copying: 783/1024 [MB] (12 MBps) [2024-11-17T14:17:04.031Z] Copying: 797/1024 [MB] (14 MBps) [2024-11-17T14:17:04.975Z] Copying: 813/1024 [MB] (16 MBps) [2024-11-17T14:17:06.360Z] Copying: 826/1024 [MB] (12 MBps) [2024-11-17T14:17:07.302Z] Copying: 845/1024 [MB] (19 MBps) [2024-11-17T14:17:08.246Z] Copying: 856/1024 [MB] (11 MBps) [2024-11-17T14:17:09.191Z] Copying: 898/1024 [MB] (41 MBps) [2024-11-17T14:17:10.131Z] Copying: 926/1024 [MB] (28 MBps) [2024-11-17T14:17:11.075Z] Copying: 941/1024 [MB] (14 MBps) [2024-11-17T14:17:12.017Z] Copying: 961/1024 [MB] (20 MBps) [2024-11-17T14:17:13.402Z] Copying: 973/1024 [MB] (12 MBps) [2024-11-17T14:17:13.974Z] Copying: 989/1024 [MB] (15 MBps) [2024-11-17T14:17:15.361Z] Copying: 1005/1024 [MB] (16 MBps) [2024-11-17T14:17:15.623Z] Copying: 1018/1024 [MB] (12 MBps) [2024-11-17T14:17:15.623Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-17 14:17:15.441504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.322 [2024-11-17 14:17:15.441559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:37.322 [2024-11-17 14:17:15.441576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:37.322 [2024-11-17 14:17:15.441586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.322 [2024-11-17 14:17:15.441608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:37.322 [2024-11-17 14:17:15.442419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.322 [2024-11-17 14:17:15.442453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:37.322 [2024-11-17 14:17:15.442477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:28:37.322 [2024-11-17 14:17:15.442486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.322 [2024-11-17 14:17:15.444963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.322 [2024-11-17 14:17:15.445001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:37.322 [2024-11-17 14:17:15.445022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:28:37.322 [2024-11-17 14:17:15.445030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.322 [2024-11-17 14:17:15.445059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.322 [2024-11-17 14:17:15.445072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:37.322 [2024-11-17 14:17:15.445081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:37.322 [2024-11-17 14:17:15.445088] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:37.322 [2024-11-17 14:17:15.445153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.322 [2024-11-17 14:17:15.445163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:37.322 [2024-11-17 14:17:15.445177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:37.322 [2024-11-17 14:17:15.445185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.322 [2024-11-17 14:17:15.445198] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:37.323 [2024-11-17 14:17:15.445215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445584] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445775] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:37.323 [2024-11-17 14:17:15.445904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 
14:17:15.445962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.445992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:37.324 [2024-11-17 14:17:15.446007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:37.324 [2024-11-17 14:17:15.446018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:28:37.324 [2024-11-17 14:17:15.446026] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:37.324 [2024-11-17 14:17:15.446034] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:28:37.324 [2024-11-17 14:17:15.446042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:37.324 [2024-11-17 14:17:15.446049] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:37.324 [2024-11-17 14:17:15.446066] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:37.324 [2024-11-17 14:17:15.446074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:37.324 [2024-11-17 14:17:15.446081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:37.324 [2024-11-17 14:17:15.446088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:37.324 [2024-11-17 14:17:15.446095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:37.324 [2024-11-17 14:17:15.446102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.324 [2024-11-17 14:17:15.446110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:37.324 [2024-11-17 14:17:15.446117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:28:37.324 [2024-11-17 14:17:15.446125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.448526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.324 [2024-11-17 14:17:15.448552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:37.324 [2024-11-17 14:17:15.448564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:28:37.324 [2024-11-17 14:17:15.448573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.448707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:37.324 [2024-11-17 14:17:15.448717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:37.324 [2024-11-17 14:17:15.448727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:37.324 [2024-11-17 14:17:15.448739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.455823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.455869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:37.324 
[2024-11-17 14:17:15.455880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.455894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.455961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.455971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:37.324 [2024-11-17 14:17:15.455980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.455991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.456041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.456050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:37.324 [2024-11-17 14:17:15.456059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.456067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.456082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.456091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:37.324 [2024-11-17 14:17:15.456099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.456106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.470039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.470085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:37.324 [2024-11-17 14:17:15.470096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.470104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:37.324 [2024-11-17 14:17:15.480591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:37.324 [2024-11-17 14:17:15.480681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:37.324 [2024-11-17 14:17:15.480743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480817] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:37.324 [2024-11-17 14:17:15.480826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:37.324 [2024-11-17 14:17:15.480883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.480932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.480945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:37.324 [2024-11-17 14:17:15.480954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.480961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.481008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:37.324 [2024-11-17 14:17:15.481020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:37.324 [2024-11-17 14:17:15.481028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:37.324 [2024-11-17 14:17:15.481037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:37.324 [2024-11-17 14:17:15.481172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.631 ms, result 0 00:28:37.897 00:28:37.897 00:28:37.897 14:17:16 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:28:37.897 [2024-11-17 14:17:16.077254] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
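The spdk_dd invocation echoed above reads the test data back out of the FTL bdev: --ib=ftl0 names the input bdev, --of the output file, --json the FTL configuration, and --count=262144 the number of blocks to transfer. At the 4 KiB block size implied by the layout dump (0x1900000 blocks backing 102400.00 MiB in data_btm), 262144 blocks is exactly 1 GiB, matching the Copying: 1024/1024 [MB] total reported for the earlier write pass. A minimal sketch of re-running the same read by hand, with the paths taken verbatim from the log:

    # 262144 blocks x 4 KiB/block = 1 GiB, the same 1024 MB the copy loop reports.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --count=262144 || echo "spdk_dd exited with status $?"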
00:28:37.897 [2024-11-17 14:17:16.077393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93464 ] 00:28:38.158 [2024-11-17 14:17:16.229944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.158 [2024-11-17 14:17:16.279227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.158 [2024-11-17 14:17:16.389491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.158 [2024-11-17 14:17:16.389581] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:38.422 [2024-11-17 14:17:16.551340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.551407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:38.422 [2024-11-17 14:17:16.551426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.422 [2024-11-17 14:17:16.551436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.551496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.551511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.422 [2024-11-17 14:17:16.551520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:38.422 [2024-11-17 14:17:16.551529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.551555] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:38.422 [2024-11-17 14:17:16.552293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:38.422 [2024-11-17 14:17:16.552348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.552364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.422 [2024-11-17 14:17:16.552378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:28:38.422 [2024-11-17 14:17:16.552391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.552697] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:28:38.422 [2024-11-17 14:17:16.552741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.552751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:38.422 [2024-11-17 14:17:16.552762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:38.422 [2024-11-17 14:17:16.552770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.552835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.552848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:38.422 [2024-11-17 14:17:16.552857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:38.422 [2024-11-17 14:17:16.552865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.553122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:38.422 [2024-11-17 14:17:16.553134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.422 [2024-11-17 14:17:16.553150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:28:38.422 [2024-11-17 14:17:16.553157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.553903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.553961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.422 [2024-11-17 14:17:16.553980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:28:38.422 [2024-11-17 14:17:16.553989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.554023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.554033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:38.422 [2024-11-17 14:17:16.554045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:38.422 [2024-11-17 14:17:16.554053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.554075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:38.422 [2024-11-17 14:17:16.556303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.556330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.422 [2024-11-17 14:17:16.556343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:28:38.422 [2024-11-17 14:17:16.556350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.556385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.556394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:38.422 [2024-11-17 14:17:16.556403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:38.422 [2024-11-17 14:17:16.556410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.556470] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:38.422 [2024-11-17 14:17:16.556493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:38.422 [2024-11-17 14:17:16.556532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:38.422 [2024-11-17 14:17:16.556548] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:38.422 [2024-11-17 14:17:16.556656] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:38.422 [2024-11-17 14:17:16.556667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:38.422 [2024-11-17 14:17:16.556679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:38.422 [2024-11-17 14:17:16.556690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:38.422 [2024-11-17 14:17:16.556699] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:38.422 [2024-11-17 14:17:16.556715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:38.422 [2024-11-17 14:17:16.556726] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:38.422 [2024-11-17 14:17:16.556734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:38.422 [2024-11-17 14:17:16.556741] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:38.422 [2024-11-17 14:17:16.556749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.556758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:38.422 [2024-11-17 14:17:16.556771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:38.422 [2024-11-17 14:17:16.556779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.556861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.422 [2024-11-17 14:17:16.556870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:38.422 [2024-11-17 14:17:16.556877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:38.422 [2024-11-17 14:17:16.556888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.422 [2024-11-17 14:17:16.556989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:38.422 [2024-11-17 14:17:16.557001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:38.422 [2024-11-17 14:17:16.557011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:38.422 [2024-11-17 14:17:16.557024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.422 [2024-11-17 14:17:16.557033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:38.423 [2024-11-17 14:17:16.557047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:38.423 [2024-11-17 14:17:16.557071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.423 [2024-11-17 14:17:16.557086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:38.423 [2024-11-17 14:17:16.557095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:38.423 [2024-11-17 14:17:16.557102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:38.423 [2024-11-17 14:17:16.557110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:38.423 [2024-11-17 14:17:16.557119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:38.423 [2024-11-17 14:17:16.557126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:38.423 [2024-11-17 14:17:16.557142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557149] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:38.423 [2024-11-17 14:17:16.557167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:38.423 [2024-11-17 14:17:16.557190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:38.423 [2024-11-17 14:17:16.557216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:38.423 [2024-11-17 14:17:16.557267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:38.423 [2024-11-17 14:17:16.557291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.423 [2024-11-17 14:17:16.557308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:38.423 [2024-11-17 14:17:16.557324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:38.423 [2024-11-17 14:17:16.557333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:38.423 [2024-11-17 14:17:16.557341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:38.423 [2024-11-17 14:17:16.557349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:38.423 [2024-11-17 14:17:16.557357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:38.423 [2024-11-17 14:17:16.557373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:38.423 [2024-11-17 14:17:16.557381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557388] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:38.423 [2024-11-17 14:17:16.557398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:38.423 [2024-11-17 14:17:16.557410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:38.423 [2024-11-17 14:17:16.557432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:38.423 [2024-11-17 14:17:16.557440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:38.423 [2024-11-17 14:17:16.557449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:38.423 
[2024-11-17 14:17:16.557458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:38.423 [2024-11-17 14:17:16.557467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:38.423 [2024-11-17 14:17:16.557474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:38.423 [2024-11-17 14:17:16.557482] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:38.423 [2024-11-17 14:17:16.557495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:38.423 [2024-11-17 14:17:16.557515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:38.423 [2024-11-17 14:17:16.557522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:38.423 [2024-11-17 14:17:16.557530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:38.423 [2024-11-17 14:17:16.557538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:38.423 [2024-11-17 14:17:16.557544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:38.423 [2024-11-17 14:17:16.557551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:38.423 [2024-11-17 14:17:16.557560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:38.423 [2024-11-17 14:17:16.557567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:38.423 [2024-11-17 14:17:16.557582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:38.423 [2024-11-17 14:17:16.557622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:38.423 [2024-11-17 14:17:16.557630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:38.423 [2024-11-17 14:17:16.557652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:38.423 [2024-11-17 14:17:16.557659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:38.423 [2024-11-17 14:17:16.557666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:38.423 [2024-11-17 14:17:16.557674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.557682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:38.423 [2024-11-17 14:17:16.557690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:28:38.423 [2024-11-17 14:17:16.557697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.576614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.576684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.423 [2024-11-17 14:17:16.576702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.871 ms 00:28:38.423 [2024-11-17 14:17:16.576716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.576820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.576832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:38.423 [2024-11-17 14:17:16.576847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:38.423 [2024-11-17 14:17:16.576855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.589286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.589338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.423 [2024-11-17 14:17:16.589354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.357 ms 00:28:38.423 [2024-11-17 14:17:16.589362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.589397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.589406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.423 [2024-11-17 14:17:16.589415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:38.423 [2024-11-17 14:17:16.589424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.589520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.589532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.423 [2024-11-17 14:17:16.589540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:38.423 [2024-11-17 14:17:16.589551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.589683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.589695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.423 [2024-11-17 14:17:16.589704] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:28:38.423 [2024-11-17 14:17:16.589715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.423 [2024-11-17 14:17:16.597049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.423 [2024-11-17 14:17:16.597101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.424 [2024-11-17 14:17:16.597112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.309 ms 00:28:38.424 [2024-11-17 14:17:16.597128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.597263] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:38.424 [2024-11-17 14:17:16.597276] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:38.424 [2024-11-17 14:17:16.597287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.597305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:38.424 [2024-11-17 14:17:16.597314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:38.424 [2024-11-17 14:17:16.597322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.614525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.614577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:38.424 [2024-11-17 14:17:16.614589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.184 ms 00:28:38.424 [2024-11-17 14:17:16.614597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.614735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.614749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:38.424 [2024-11-17 14:17:16.614758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:38.424 [2024-11-17 14:17:16.614767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.614820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.614829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:38.424 [2024-11-17 14:17:16.614838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:28:38.424 [2024-11-17 14:17:16.614849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.615170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.615196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:38.424 [2024-11-17 14:17:16.615205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:38.424 [2024-11-17 14:17:16.615213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.615233] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:28:38.424 [2024-11-17 14:17:16.615280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.615289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:28:38.424 [2024-11-17 14:17:16.615302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:38.424 [2024-11-17 14:17:16.615312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.624786] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:38.424 [2024-11-17 14:17:16.624941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.624952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:38.424 [2024-11-17 14:17:16.624961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.610 ms 00:28:38.424 [2024-11-17 14:17:16.624969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.627748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.627790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:38.424 [2024-11-17 14:17:16.627800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.749 ms 00:28:38.424 [2024-11-17 14:17:16.627808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.627907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.627918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:38.424 [2024-11-17 14:17:16.627927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:38.424 [2024-11-17 14:17:16.627935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.627959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.627972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:38.424 [2024-11-17 14:17:16.627985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:38.424 [2024-11-17 14:17:16.627993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.628025] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:38.424 [2024-11-17 14:17:16.628034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.628046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:38.424 [2024-11-17 14:17:16.628054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:38.424 [2024-11-17 14:17:16.628062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.634454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.634509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:38.424 [2024-11-17 14:17:16.634527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.370 ms 00:28:38.424 [2024-11-17 14:17:16.634536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.634622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.424 [2024-11-17 14:17:16.634633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:38.424 [2024-11-17 14:17:16.634648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:28:38.424 [2024-11-17 14:17:16.634655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.424 [2024-11-17 14:17:16.635984] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.192 ms, result 0 00:28:39.813  [2024-11-17T14:17:19.057Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-17T14:17:20.000Z] Copying: 33/1024 [MB] (14 MBps) [2024-11-17T14:17:20.943Z] Copying: 49/1024 [MB] (15 MBps) [2024-11-17T14:17:21.884Z] Copying: 64/1024 [MB] (15 MBps) [2024-11-17T14:17:22.876Z] Copying: 82/1024 [MB] (17 MBps) [2024-11-17T14:17:23.885Z] Copying: 95/1024 [MB] (12 MBps) [2024-11-17T14:17:24.828Z] Copying: 107/1024 [MB] (12 MBps) [2024-11-17T14:17:26.213Z] Copying: 118/1024 [MB] (10 MBps) [2024-11-17T14:17:27.156Z] Copying: 130/1024 [MB] (12 MBps) [2024-11-17T14:17:28.096Z] Copying: 145/1024 [MB] (14 MBps) [2024-11-17T14:17:29.035Z] Copying: 157/1024 [MB] (12 MBps) [2024-11-17T14:17:29.975Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-17T14:17:30.915Z] Copying: 178/1024 [MB] (10 MBps) [2024-11-17T14:17:31.855Z] Copying: 189/1024 [MB] (10 MBps) [2024-11-17T14:17:33.241Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-17T14:17:34.185Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-17T14:17:35.127Z] Copying: 230/1024 [MB] (18 MBps) [2024-11-17T14:17:36.071Z] Copying: 241/1024 [MB] (10 MBps) [2024-11-17T14:17:37.014Z] Copying: 251/1024 [MB] (10 MBps) [2024-11-17T14:17:37.956Z] Copying: 262/1024 [MB] (11 MBps) [2024-11-17T14:17:38.900Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-17T14:17:39.843Z] Copying: 290/1024 [MB] (12 MBps) [2024-11-17T14:17:41.232Z] Copying: 302/1024 [MB] (11 MBps) [2024-11-17T14:17:42.177Z] Copying: 318/1024 [MB] (15 MBps) [2024-11-17T14:17:43.122Z] Copying: 328/1024 [MB] (10 MBps) [2024-11-17T14:17:44.066Z] Copying: 339/1024 [MB] (10 MBps) [2024-11-17T14:17:45.011Z] Copying: 355/1024 [MB] (16 MBps) [2024-11-17T14:17:45.954Z] Copying: 372/1024 [MB] (16 MBps) [2024-11-17T14:17:46.896Z] Copying: 388/1024 [MB] (16 MBps) [2024-11-17T14:17:47.839Z] Copying: 410/1024 [MB] (21 MBps) [2024-11-17T14:17:49.224Z] Copying: 423/1024 [MB] (12 MBps) [2024-11-17T14:17:50.168Z] Copying: 437/1024 [MB] (14 MBps) [2024-11-17T14:17:51.112Z] Copying: 457/1024 [MB] (20 MBps) [2024-11-17T14:17:52.057Z] Copying: 474/1024 [MB] (16 MBps) [2024-11-17T14:17:53.000Z] Copying: 493/1024 [MB] (19 MBps) [2024-11-17T14:17:53.945Z] Copying: 513/1024 [MB] (19 MBps) [2024-11-17T14:17:54.943Z] Copying: 532/1024 [MB] (18 MBps) [2024-11-17T14:17:55.930Z] Copying: 548/1024 [MB] (16 MBps) [2024-11-17T14:17:56.874Z] Copying: 560/1024 [MB] (12 MBps) [2024-11-17T14:17:58.262Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-17T14:17:58.834Z] Copying: 581/1024 [MB] (10 MBps) [2024-11-17T14:18:00.232Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-17T14:18:01.176Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-17T14:18:02.120Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-17T14:18:03.065Z] Copying: 623/1024 [MB] (10 MBps) [2024-11-17T14:18:04.009Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-17T14:18:04.953Z] Copying: 644/1024 [MB] (10 MBps) [2024-11-17T14:18:05.898Z] Copying: 655/1024 [MB] (10 MBps) [2024-11-17T14:18:06.842Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-17T14:18:08.228Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-17T14:18:09.172Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-17T14:18:10.112Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-17T14:18:11.058Z] Copying: 710/1024 [MB] (10 MBps) [2024-11-17T14:18:12.004Z] Copying: 725/1024 [MB] 
(15 MBps) [2024-11-17T14:18:12.948Z] Copying: 744/1024 [MB] (19 MBps) [2024-11-17T14:18:13.893Z] Copying: 763/1024 [MB] (18 MBps) [2024-11-17T14:18:14.838Z] Copying: 784/1024 [MB] (20 MBps) [2024-11-17T14:18:16.225Z] Copying: 798/1024 [MB] (14 MBps) [2024-11-17T14:18:17.167Z] Copying: 811/1024 [MB] (13 MBps) [2024-11-17T14:18:18.108Z] Copying: 827/1024 [MB] (16 MBps) [2024-11-17T14:18:19.050Z] Copying: 845/1024 [MB] (17 MBps) [2024-11-17T14:18:19.991Z] Copying: 861/1024 [MB] (16 MBps) [2024-11-17T14:18:20.933Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-17T14:18:21.875Z] Copying: 892/1024 [MB] (19 MBps) [2024-11-17T14:18:23.262Z] Copying: 908/1024 [MB] (15 MBps) [2024-11-17T14:18:23.833Z] Copying: 924/1024 [MB] (15 MBps) [2024-11-17T14:18:25.217Z] Copying: 942/1024 [MB] (18 MBps) [2024-11-17T14:18:26.158Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-17T14:18:27.176Z] Copying: 967/1024 [MB] (11 MBps) [2024-11-17T14:18:28.146Z] Copying: 979/1024 [MB] (11 MBps) [2024-11-17T14:18:29.092Z] Copying: 991/1024 [MB] (11 MBps) [2024-11-17T14:18:30.041Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-17T14:18:30.613Z] Copying: 1014/1024 [MB] (11 MBps) [2024-11-17T14:18:30.613Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-17 14:18:30.531659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.312 [2024-11-17 14:18:30.531751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:52.312 [2024-11-17 14:18:30.531774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.312 [2024-11-17 14:18:30.531787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.312 [2024-11-17 14:18:30.531822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:52.312 [2024-11-17 14:18:30.532616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.312 [2024-11-17 14:18:30.532657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:52.312 [2024-11-17 14:18:30.532671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:29:52.312 [2024-11-17 14:18:30.532680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.312 [2024-11-17 14:18:30.532919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.312 [2024-11-17 14:18:30.532931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:52.312 [2024-11-17 14:18:30.532941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:29:52.312 [2024-11-17 14:18:30.532950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.312 [2024-11-17 14:18:30.532987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.312 [2024-11-17 14:18:30.532998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:52.312 [2024-11-17 14:18:30.533010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:52.312 [2024-11-17 14:18:30.533023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.312 [2024-11-17 14:18:30.533087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.312 [2024-11-17 14:18:30.533097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:52.312 [2024-11-17 14:18:30.533106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:52.312 [2024-11-17 
14:18:30.533115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.312 [2024-11-17 14:18:30.533133] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:52.312 [2024-11-17 14:18:30.533147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533355] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:52.312 [2024-11-17 14:18:30.533528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 
14:18:30.533562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:29:52.313 [2024-11-17 14:18:30.533748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:52.313 [2024-11-17 14:18:30.533966] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:52.313 [2024-11-17 14:18:30.533974] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:29:52.313 [2024-11-17 14:18:30.533985] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:52.313 [2024-11-17 14:18:30.533994] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:52.313 [2024-11-17 14:18:30.534002] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:52.313 [2024-11-17 14:18:30.534010] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:52.313 [2024-11-17 14:18:30.534017] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:52.313 [2024-11-17 14:18:30.534043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:52.313 [2024-11-17 14:18:30.534053] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:52.313 [2024-11-17 14:18:30.534060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:52.313 [2024-11-17 14:18:30.534068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:52.313 [2024-11-17 14:18:30.534076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.313 [2024-11-17 14:18:30.534084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:52.313 [2024-11-17 14:18:30.534092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:29:52.313 [2024-11-17 14:18:30.534111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.536829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.313 [2024-11-17 14:18:30.536872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:52.313 [2024-11-17 14:18:30.536883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:29:52.313 [2024-11-17 14:18:30.536891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.537018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.313 [2024-11-17 14:18:30.537029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:52.313 [2024-11-17 14:18:30.537041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:52.313 [2024-11-17 14:18:30.537050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.544280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.313 [2024-11-17 14:18:30.544328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.313 [2024-11-17 14:18:30.544339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.313 [2024-11-17 14:18:30.544347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.544416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.313 [2024-11-17 14:18:30.544425] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.313 [2024-11-17 14:18:30.544434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.313 [2024-11-17 14:18:30.544443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.544509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.313 [2024-11-17 14:18:30.544521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.313 [2024-11-17 14:18:30.544534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.313 [2024-11-17 14:18:30.544541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.544558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.313 [2024-11-17 14:18:30.544568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.313 [2024-11-17 14:18:30.544576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.313 [2024-11-17 14:18:30.544584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.313 [2024-11-17 14:18:30.558471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.313 [2024-11-17 14:18:30.558526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.313 [2024-11-17 14:18:30.558539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.558548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.314 [2024-11-17 14:18:30.570519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.314 [2024-11-17 14:18:30.570618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.314 [2024-11-17 14:18:30.570685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.314 [2024-11-17 14:18:30.570775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:52.314 [2024-11-17 14:18:30.570824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:52.314 [2024-11-17 14:18:30.570837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.314 [2024-11-17 14:18:30.570910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.570919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.570963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.314 [2024-11-17 14:18:30.570988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.314 [2024-11-17 14:18:30.570998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.314 [2024-11-17 14:18:30.571006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.314 [2024-11-17 14:18:30.571139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 39.458 ms, result 0 00:29:52.575 00:29:52.575 00:29:52.575 14:18:30 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:55.123 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:55.123 14:18:33 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:29:55.123 [2024-11-17 14:18:33.101380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:55.123 [2024-11-17 14:18:33.101741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94245 ] 00:29:55.123 [2024-11-17 14:18:33.255405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.123 [2024-11-17 14:18:33.305509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.123 [2024-11-17 14:18:33.419845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.123 [2024-11-17 14:18:33.419925] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:55.384 [2024-11-17 14:18:33.581269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.581327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:55.384 [2024-11-17 14:18:33.581345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.384 [2024-11-17 14:18:33.581355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.581418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.581431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:55.384 [2024-11-17 14:18:33.581441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:55.384 [2024-11-17 14:18:33.581450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.581472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:55.384 [2024-11-17 14:18:33.581808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:55.384 [2024-11-17 14:18:33.581843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.581853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:55.384 [2024-11-17 14:18:33.581866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:29:55.384 [2024-11-17 14:18:33.581881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582158] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:55.384 [2024-11-17 14:18:33.582191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.582205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:55.384 [2024-11-17 14:18:33.582216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:55.384 [2024-11-17 14:18:33.582224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.582319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:55.384 [2024-11-17 14:18:33.582328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:55.384 [2024-11-17 14:18:33.582336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:55.384 [2024-11-17 14:18:33.582600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:55.384 [2024-11-17 14:18:33.582609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:29:55.384 [2024-11-17 14:18:33.582617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.582777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:55.384 [2024-11-17 14:18:33.582786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:55.384 [2024-11-17 14:18:33.582794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.582828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:55.384 [2024-11-17 14:18:33.582837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:55.384 [2024-11-17 14:18:33.582845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.582868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:55.384 [2024-11-17 14:18:33.585023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.585064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:55.384 [2024-11-17 14:18:33.585077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:29:55.384 [2024-11-17 14:18:33.585091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.585126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.585135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:55.384 [2024-11-17 14:18:33.585143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:55.384 [2024-11-17 14:18:33.585151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.585206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:55.384 [2024-11-17 14:18:33.585228] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:55.384 [2024-11-17 14:18:33.585286] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:55.384 [2024-11-17 14:18:33.585307] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:55.384 [2024-11-17 14:18:33.585412] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:55.384 [2024-11-17 14:18:33.585423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:55.384 [2024-11-17 14:18:33.585434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:55.384 [2024-11-17 14:18:33.585444] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585456] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585468] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:55.384 [2024-11-17 14:18:33.585479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:55.384 [2024-11-17 14:18:33.585487] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:55.384 [2024-11-17 14:18:33.585499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:55.384 [2024-11-17 14:18:33.585507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.585518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:55.384 [2024-11-17 14:18:33.585528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:29:55.384 [2024-11-17 14:18:33.585536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.585619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.384 [2024-11-17 14:18:33.585629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:55.384 [2024-11-17 14:18:33.585636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:55.384 [2024-11-17 14:18:33.585647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.384 [2024-11-17 14:18:33.585756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:55.384 [2024-11-17 14:18:33.585778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:55.384 [2024-11-17 14:18:33.585791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:55.384 [2024-11-17 14:18:33.585831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:55.384 [2024-11-17 14:18:33.585859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.384 [2024-11-17 14:18:33.585878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:55.384 [2024-11-17 14:18:33.585886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:55.384 [2024-11-17 14:18:33.585897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:55.384 [2024-11-17 14:18:33.585907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:55.384 [2024-11-17 14:18:33.585915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:55.384 [2024-11-17 14:18:33.585923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:55.384 [2024-11-17 14:18:33.585940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585947] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:55.384 [2024-11-17 14:18:33.585966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.384 [2024-11-17 14:18:33.585981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:55.384 [2024-11-17 14:18:33.585989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:55.384 [2024-11-17 14:18:33.585996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.384 [2024-11-17 14:18:33.586003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:55.384 [2024-11-17 14:18:33.586011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:55.384 [2024-11-17 14:18:33.586021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.384 [2024-11-17 14:18:33.586029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:55.384 [2024-11-17 14:18:33.586036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:55.384 [2024-11-17 14:18:33.586044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:55.384 [2024-11-17 14:18:33.586051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:55.385 [2024-11-17 14:18:33.586058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:55.385 [2024-11-17 14:18:33.586066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.385 [2024-11-17 14:18:33.586073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:55.385 [2024-11-17 14:18:33.586086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:55.385 [2024-11-17 14:18:33.586095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:55.385 [2024-11-17 14:18:33.586102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:55.385 [2024-11-17 14:18:33.586110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:55.385 [2024-11-17 14:18:33.586117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.385 [2024-11-17 14:18:33.586125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:55.385 [2024-11-17 14:18:33.586134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:55.385 [2024-11-17 14:18:33.586142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.385 [2024-11-17 14:18:33.586149] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:55.385 [2024-11-17 14:18:33.586159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:55.385 [2024-11-17 14:18:33.586171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:55.385 [2024-11-17 14:18:33.586178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:55.385 [2024-11-17 14:18:33.586186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:55.385 [2024-11-17 14:18:33.586194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:55.385 [2024-11-17 14:18:33.586202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:55.385 
[2024-11-17 14:18:33.586209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:55.385 [2024-11-17 14:18:33.586218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:55.385 [2024-11-17 14:18:33.586224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:55.385 [2024-11-17 14:18:33.586233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:55.385 [2024-11-17 14:18:33.586271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:55.385 [2024-11-17 14:18:33.586288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:55.385 [2024-11-17 14:18:33.586297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:55.385 [2024-11-17 14:18:33.586304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:55.385 [2024-11-17 14:18:33.586311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:55.385 [2024-11-17 14:18:33.586319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:55.385 [2024-11-17 14:18:33.586326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:55.385 [2024-11-17 14:18:33.586333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:55.385 [2024-11-17 14:18:33.586340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:55.385 [2024-11-17 14:18:33.586347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:55.385 [2024-11-17 14:18:33.586388] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:55.385 [2024-11-17 14:18:33.586396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:55.385 [2024-11-17 14:18:33.586411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:55.385 [2024-11-17 14:18:33.586418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:55.385 [2024-11-17 14:18:33.586427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:55.385 [2024-11-17 14:18:33.586442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.586451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:55.385 [2024-11-17 14:18:33.586461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:29:55.385 [2024-11-17 14:18:33.586469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.605762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.605845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.385 [2024-11-17 14:18:33.605876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.243 ms 00:29:55.385 [2024-11-17 14:18:33.605903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.606085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.606108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:55.385 [2024-11-17 14:18:33.606136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:29:55.385 [2024-11-17 14:18:33.606162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.619006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.619059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:55.385 [2024-11-17 14:18:33.619074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:29:55.385 [2024-11-17 14:18:33.619082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.619123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.619132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.385 [2024-11-17 14:18:33.619141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:55.385 [2024-11-17 14:18:33.619149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.619286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.619299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.385 [2024-11-17 14:18:33.619307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:29:55.385 [2024-11-17 14:18:33.619323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.619448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.619458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.385 [2024-11-17 14:18:33.619466] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:29:55.385 [2024-11-17 14:18:33.619479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.626273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.626314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.385 [2024-11-17 14:18:33.626324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.772 ms 00:29:55.385 [2024-11-17 14:18:33.626338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.626451] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:55.385 [2024-11-17 14:18:33.626467] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:55.385 [2024-11-17 14:18:33.626477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.626494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:55.385 [2024-11-17 14:18:33.626503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:55.385 [2024-11-17 14:18:33.626511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.638829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.638892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:55.385 [2024-11-17 14:18:33.638903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:29:55.385 [2024-11-17 14:18:33.638911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.639045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.639056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:55.385 [2024-11-17 14:18:33.639065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:29:55.385 [2024-11-17 14:18:33.639073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.639125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.639135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:55.385 [2024-11-17 14:18:33.639144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:55.385 [2024-11-17 14:18:33.639160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.639508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.639531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:55.385 [2024-11-17 14:18:33.639541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:29:55.385 [2024-11-17 14:18:33.639551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.639578] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:55.385 [2024-11-17 14:18:33.639592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.639600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:55.385 [2024-11-17 14:18:33.639611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:29:55.385 [2024-11-17 14:18:33.639623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.648904] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:55.385 [2024-11-17 14:18:33.649061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.649076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:55.385 [2024-11-17 14:18:33.649093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.419 ms 00:29:55.385 [2024-11-17 14:18:33.649101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.651572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.651609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:55.385 [2024-11-17 14:18:33.651619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.446 ms 00:29:55.385 [2024-11-17 14:18:33.651627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.651720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.651731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:55.385 [2024-11-17 14:18:33.651740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:55.385 [2024-11-17 14:18:33.651747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.651770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.651784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:55.385 [2024-11-17 14:18:33.651793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.385 [2024-11-17 14:18:33.651801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.651835] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:55.385 [2024-11-17 14:18:33.651845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.651860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:55.385 [2024-11-17 14:18:33.651868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:55.385 [2024-11-17 14:18:33.651876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.657950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.658003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:55.385 [2024-11-17 14:18:33.658021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:29:55.385 [2024-11-17 14:18:33.658030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.658112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.385 [2024-11-17 14:18:33.658123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:55.385 [2024-11-17 14:18:33.658132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.038 ms 00:29:55.385 [2024-11-17 14:18:33.658145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.385 [2024-11-17 14:18:33.659359] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.632 ms, result 0 00:29:56.776  [2024-11-17T14:19:27.899Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-17 14:19:27.807394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.598 [2024-11-17 14:19:27.807482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:49.598 [2024-11-17 14:19:27.807500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:49.598 [2024-11-17 14:19:27.807510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.598 [2024-11-17 14:19:27.808823] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:49.598 [2024-11-17 14:19:27.811670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.598 [2024-11-17 14:19:27.811719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:49.598 [2024-11-17 14:19:27.811732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.817 ms 00:30:49.598 [2024-11-17 14:19:27.811742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.598 [2024-11-17 14:19:27.823335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.598 [2024-11-17 14:19:27.823396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:49.598 [2024-11-17 14:19:27.823415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.879 ms 00:30:49.598 [2024-11-17 14:19:27.823424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.598 [2024-11-17 14:19:27.823461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.598 [2024-11-17 14:19:27.823471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:49.598 [2024-11-17 14:19:27.823480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:49.598 [2024-11-17 14:19:27.823489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.598 [2024-11-17 14:19:27.823561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.598 [2024-11-17 14:19:27.823572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:49.598 [2024-11-17 14:19:27.823580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:30:49.598 [2024-11-17 14:19:27.823596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.598 [2024-11-17 14:19:27.823611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:49.598 [2024-11-17 14:19:27.823624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126720 / 261120 wr_cnt: 1 state: open 00:30:49.598 [2024-11-17 14:19:27.823639 .. 14:19:27.824447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:49.599 [2024-11-17 14:19:27.824466] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:49.599 [2024-11-17 14:19:27.824477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:30:49.599 [2024-11-17 14:19:27.824489] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126720 00:30:49.599 [2024-11-17 14:19:27.824496] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126752 00:30:49.599 [2024-11-17 14:19:27.824503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126720 00:30:49.599 [2024-11-17 14:19:27.824512] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003
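
A quick sanity check on the statistics above -- as a sketch, assuming (not verified against the SPDK sources) that the reported WAF is simply total media writes over user writes:

    WAF = total writes / user writes = 126752 / 126720 ~ 1.00025

which the dump rounds to 1.0003; on that reading, the 32 extra blocks would be FTL-internal metadata written alongside the 126720 user blocks.
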
00:30:49.599 [2024-11-17 14:19:27.824520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:49.599 [2024-11-17 14:19:27.824528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:49.599 [2024-11-17 14:19:27.824538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:49.599 [2024-11-17 14:19:27.824545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:49.599 [2024-11-17 14:19:27.824551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:49.599 [2024-11-17 14:19:27.824558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.599 [2024-11-17 14:19:27.824567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:49.599 [2024-11-17 14:19:27.824575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:30:49.599 [2024-11-17 14:19:27.824582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.599 [2024-11-17 14:19:27.826891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.599 [2024-11-17 14:19:27.826924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:49.599 [2024-11-17 14:19:27.826935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.292 ms 00:30:49.599 [2024-11-17 14:19:27.826944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.599 [2024-11-17 14:19:27.827081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.599 [2024-11-17 14:19:27.827091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:49.599 [2024-11-17 14:19:27.827108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:30:49.599 [2024-11-17 14:19:27.827115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.599 [2024-11-17 14:19:27.833743 .. 14:19:27.860182] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback (each duration: 0.000 ms, status: 0): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:30:49.600 [2024-11-17 14:19:27.860342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.759 ms, result 0 00:30:50.543
00:30:50.543
00:30:50.804 14:19:28 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
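
The restore.sh step above reads the restored range back out of the FTL bdev into a plain file, presumably for comparison against the data written before the fast shutdown (this is the ftl_restore_fast test). A minimal sketch of the same invocation in bash, using only the paths and flags that appear verbatim in the command line above (--skip and --count are offsets/lengths in I/O units, mirroring dd):

    # Sketch: re-run the read-back step shown above.
    # --ib names the input bdev (ftl0); --of is the output file for the later diff;
    # --json points spdk_dd at the bdev configuration so it can bring up ftl0.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
        --of="$SPDK/test/ftl/testfile" \
        --json="$SPDK/test/ftl/config/ftl.json" \
        --skip=131072 --count=262144
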
00:30:50.804 [2024-11-17 14:19:28.921700] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:30:50.804 [2024-11-17 14:19:28.921865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94802 ] 00:30:50.804 [2024-11-17 14:19:29.075946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.065 [2024-11-17 14:19:29.125829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.065 [2024-11-17 14:19:29.238937] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:51.065 [2024-11-17 14:19:29.239025] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:51.327 [2024-11-17 14:19:29.400532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.400586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:51.327 [2024-11-17 14:19:29.400604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:51.327 [2024-11-17 14:19:29.400613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.400671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.400685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:51.327 [2024-11-17 14:19:29.400698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:51.327 [2024-11-17 14:19:29.400705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.400725] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:51.327 [2024-11-17 14:19:29.401003] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:51.327 [2024-11-17 14:19:29.401020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.401029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:51.327 [2024-11-17 14:19:29.401042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:30:51.327 [2024-11-17 14:19:29.401052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.401354] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:51.327 [2024-11-17 14:19:29.401382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.401419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:51.327 [2024-11-17 14:19:29.401429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:51.327 [2024-11-17 14:19:29.401437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.401497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.401510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:51.327 [2024-11-17 14:19:29.401520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:51.327 [2024-11-17 14:19:29.401527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.401820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.401838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:51.327 [2024-11-17 14:19:29.401847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:30:51.327 [2024-11-17 14:19:29.401856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.401942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.401955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:51.327 [2024-11-17 14:19:29.401967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:51.327 [2024-11-17 14:19:29.401979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.402010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.402019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:51.327 [2024-11-17 14:19:29.402027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:51.327 [2024-11-17 14:19:29.402034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.402054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:51.327 [2024-11-17 14:19:29.404139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.404181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:51.327 [2024-11-17 14:19:29.404192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:30:51.327 [2024-11-17 14:19:29.404200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.404234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.404255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:51.327 [2024-11-17 14:19:29.404269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:51.327 [2024-11-17 14:19:29.404277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.404311] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:51.327 [2024-11-17 14:19:29.404337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:51.327 [2024-11-17 14:19:29.404375] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:51.327 [2024-11-17 14:19:29.404391] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:51.327 [2024-11-17 14:19:29.404495] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:51.327 [2024-11-17 14:19:29.404507] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:51.327 [2024-11-17 14:19:29.404517] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:51.327 [2024-11-17 14:19:29.404527] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:51.327 [2024-11-17 14:19:29.404541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:51.327 [2024-11-17 14:19:29.404549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:51.327 [2024-11-17 14:19:29.404560] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:51.327 [2024-11-17 14:19:29.404567] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:51.327 [2024-11-17 14:19:29.404574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:51.327 [2024-11-17 14:19:29.404583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.404590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:51.327 [2024-11-17 14:19:29.404598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:30:51.327 [2024-11-17 14:19:29.404605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.327 [2024-11-17 14:19:29.404709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.327 [2024-11-17 14:19:29.404719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:51.327 [2024-11-17 14:19:29.404726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:30:51.327 [2024-11-17 14:19:29.404740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.404845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:51.328 [2024-11-17 14:19:29.404857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:51.328 [2024-11-17 14:19:29.404866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:51.328 [2024-11-17 14:19:29.404875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.404884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:51.328 [2024-11-17 14:19:29.404898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.404906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:51.328 [2024-11-17 14:19:29.404916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:51.328 [2024-11-17 14:19:29.404925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:51.328 [2024-11-17 14:19:29.404932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:51.328 [2024-11-17 14:19:29.404940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:51.328 [2024-11-17 14:19:29.404954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:30:51.328 [2024-11-17 14:19:29.404963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:51.328 [2024-11-17 14:19:29.404972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:51.328 [2024-11-17 14:19:29.404980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:51.328 [2024-11-17 14:19:29.404988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.404995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:51.328 [2024-11-17 14:19:29.405003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:51.328 [2024-11-17 14:19:29.405026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:51.328 [2024-11-17 14:19:29.405050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:51.328 [2024-11-17 14:19:29.405074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:51.328 [2024-11-17 14:19:29.405100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:51.328 [2024-11-17 14:19:29.405123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:51.328 [2024-11-17 14:19:29.405138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:51.328 [2024-11-17 14:19:29.405146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:51.328 [2024-11-17 14:19:29.405154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:51.328 [2024-11-17 14:19:29.405161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:51.328 [2024-11-17 14:19:29.405168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:51.328 [2024-11-17 14:19:29.405175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:51.328 [2024-11-17 14:19:29.405188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:51.328 [2024-11-17 14:19:29.405195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
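
The NV cache layout dumped above lines up with the L2P parameters reported a few records earlier (L2P entries: 20971520, L2P address size: 4): at one 4-byte entry per addressable block,

    20971520 entries x 4 B = 83886080 B = 80.00 MiB

which is exactly the size of the l2p region (blocks: 80.00 MiB). This is an observation from the dumped numbers, not a claim about the FTL internals.
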
00:30:51.328 [2024-11-17 14:19:29.405204] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:51.328 [2024-11-17 14:19:29.405214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:51.328 [2024-11-17 14:19:29.405223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:51.328 [2024-11-17 14:19:29.405498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:51.328 [2024-11-17 14:19:29.405537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:51.328 [2024-11-17 14:19:29.405558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:51.328 [2024-11-17 14:19:29.405578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:51.328 [2024-11-17 14:19:29.405597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:51.328 [2024-11-17 14:19:29.405615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:51.328 [2024-11-17 14:19:29.405636] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:51.328 [2024-11-17 14:19:29.405676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.405705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:51.328 [2024-11-17 14:19:29.405735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:51.328 [2024-11-17 14:19:29.405763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:51.328 [2024-11-17 14:19:29.405791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:51.328 [2024-11-17 14:19:29.405823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:51.328 [2024-11-17 14:19:29.406431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:51.328 [2024-11-17 14:19:29.407116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:51.328 [2024-11-17 14:19:29.407138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:51.328 [2024-11-17 14:19:29.407146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:51.328 [2024-11-17 14:19:29.407155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407178] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:51.328 [2024-11-17 14:19:29.407194] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:51.328 [2024-11-17 14:19:29.407203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:51.328 [2024-11-17 14:19:29.407219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:51.328 [2024-11-17 14:19:29.407226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:51.328 [2024-11-17 14:19:29.407233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:51.328 [2024-11-17 14:19:29.407284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.407296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:51.328 [2024-11-17 14:19:29.407307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:30:51.328 [2024-11-17 14:19:29.407316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.424325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.424512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:51.328 [2024-11-17 14:19:29.424586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.938 ms 00:30:51.328 [2024-11-17 14:19:29.424610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.424715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.424739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:51.328 [2024-11-17 14:19:29.424759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:51.328 [2024-11-17 14:19:29.424777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.436877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.437049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:51.328 [2024-11-17 14:19:29.437122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.024 ms 00:30:51.328 [2024-11-17 14:19:29.437150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.437211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.437257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:51.328 [2024-11-17 14:19:29.437283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:51.328 [2024-11-17 14:19:29.437306] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.328 [2024-11-17 14:19:29.437443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.328 [2024-11-17 14:19:29.437542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:51.328 [2024-11-17 14:19:29.437571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:51.328 [2024-11-17 14:19:29.437601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.437774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.438131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:51.329 [2024-11-17 14:19:29.438249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:30:51.329 [2024-11-17 14:19:29.438275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.445178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.445362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:51.329 [2024-11-17 14:19:29.445532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.864 ms 00:30:51.329 [2024-11-17 14:19:29.445564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.445694] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:51.329 [2024-11-17 14:19:29.445745] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:51.329 [2024-11-17 14:19:29.445781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.445801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:51.329 [2024-11-17 14:19:29.445829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:30:51.329 [2024-11-17 14:19:29.445889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.458357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.458495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:51.329 [2024-11-17 14:19:29.458554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.424 ms 00:30:51.329 [2024-11-17 14:19:29.458576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.458718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.458751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:51.329 [2024-11-17 14:19:29.458771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:51.329 [2024-11-17 14:19:29.458790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.458854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.458882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:51.329 [2024-11-17 14:19:29.458904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:51.329 [2024-11-17 14:19:29.458983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 
14:19:29.459359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.459402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:51.329 [2024-11-17 14:19:29.459495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:30:51.329 [2024-11-17 14:19:29.459519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.459551] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:51.329 [2024-11-17 14:19:29.459584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.459602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:51.329 [2024-11-17 14:19:29.459630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:51.329 [2024-11-17 14:19:29.459654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.468851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:51.329 [2024-11-17 14:19:29.469133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.469170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:51.329 [2024-11-17 14:19:29.469230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.446 ms 00:30:51.329 [2024-11-17 14:19:29.469267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.471931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.472064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:51.329 [2024-11-17 14:19:29.472120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:30:51.329 [2024-11-17 14:19:29.472142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.472230] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:30:51.329 [2024-11-17 14:19:29.472861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.472945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:51.329 [2024-11-17 14:19:29.472998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:30:51.329 [2024-11-17 14:19:29.473049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.473102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.473152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:51.329 [2024-11-17 14:19:29.473175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:51.329 [2024-11-17 14:19:29.473194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.473269] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:51.329 [2024-11-17 14:19:29.473296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.473314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:51.329 [2024-11-17 14:19:29.473334] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:51.329 [2024-11-17 14:19:29.473352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.479353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.479520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:51.329 [2024-11-17 14:19:29.479539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.969 ms 00:30:51.329 [2024-11-17 14:19:29.479548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.479626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:51.329 [2024-11-17 14:19:29.479638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:51.329 [2024-11-17 14:19:29.479646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:51.329 [2024-11-17 14:19:29.479653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:51.329 [2024-11-17 14:19:29.480792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.831 ms, result 0 00:30:52.813
[2024-11-17T14:19:31.686Z] Copying: 14/1024 [MB] (14 MBps) [intermediate progress-meter ticks from 33/1024 through 1023/1024 MB, 10-23 MBps each, condensed] [2024-11-17T14:20:40.214Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-11-17 14:20:40.089697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.913 [2024-11-17 14:20:40.089793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:01.913 [2024-11-17 14:20:40.089811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:01.913 [2024-11-17 14:20:40.089825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.913 [2024-11-17 14:20:40.089849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:01.913 [2024-11-17 14:20:40.090666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.913 [2024-11-17 14:20:40.090696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:01.913 [2024-11-17 14:20:40.090719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:32:01.913 [2024-11-17 14:20:40.090730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.913 [2024-11-17 14:20:40.090973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.913 [2024-11-17 14:20:40.091008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:01.913 [2024-11-17 14:20:40.091024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:32:01.913 [2024-11-17 14:20:40.091034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.913
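Each FTL management process above is traced the same way: mngt/ftl_mngt.c emits an Action marker, the step name, the measured duration, and a status code for every step, and finish_msg reports the whole process ('FTL startup' completed in 79.831 ms; the fast-shutdown sequence continues below). As a sanity check on the condensed copy progress, 1024 MB at the reported 14 MBps average works out to roughly 73 s, consistent with the ~69 s wall-clock window (14:19:31Z to 14:20:40Z) once per-tick rounding is allowed for. A minimal bash sketch of the same time-and-report pattern; run_step is a hypothetical helper for illustration only, not SPDK code:

    # Time a named step and print Action/name/duration/status, mirroring
    # the trace_step output format above (illustrative helper, not SPDK).
    run_step() {
      local name=$1; shift
      local start end status
      start=$(date +%s%N)              # GNU date: nanoseconds since epoch
      "$@"
      status=$?
      end=$(date +%s%N)
      awk -v n="$name" -v d="$((end - start))" -v s="$status" \
          'BEGIN { printf "Action\nname: %s\nduration: %.3f ms\nstatus: %d\n", n, d / 1e6, s }'
      return "$status"
    }
    run_step "Restore L2P" sleep 0.003   # prints name/duration/status like the trace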
[2024-11-17 14:20:40.091069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.913 [2024-11-17 14:20:40.091080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:01.913 [2024-11-17 14:20:40.091090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:01.913 [2024-11-17 14:20:40.091099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.913 [2024-11-17 14:20:40.091162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.913 [2024-11-17 14:20:40.091172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:01.913 [2024-11-17 14:20:40.091184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:32:01.913 [2024-11-17 14:20:40.091193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.913 [2024-11-17 14:20:40.091208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:01.913 [2024-11-17 14:20:40.091221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:32:01.913
[2024-11-17 14:20:40.091253 to 14:20:40.092533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band entries condensed) 00:32:01.914
[2024-11-17 14:20:40.092552] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:01.914 [2024-11-17 14:20:40.092566] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9eb8e1fa-342d-4de8-93e6-c05f3844e79c 00:32:01.914 [2024-11-17 14:20:40.092575] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:32:01.914 [2024-11-17 14:20:40.092583] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4384 00:32:01.914 [2024-11-17 14:20:40.092591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4352 00:32:01.914 [2024-11-17 14:20:40.092601] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:32:01.914 [2024-11-17 14:20:40.092613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:01.914 [2024-11-17 14:20:40.092628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:01.914 [2024-11-17 14:20:40.092636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:01.914 [2024-11-17 14:20:40.092643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:01.914 [2024-11-17 14:20:40.092650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:01.914 [2024-11-17 14:20:40.092657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.914 [2024-11-17 14:20:40.092665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:01.914 [2024-11-17 14:20:40.092673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:32:01.914 [2024-11-17 14:20:40.092681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.914 [2024-11-17 14:20:40.095513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.914 [2024-11-17 14:20:40.095548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:01.914 [2024-11-17 14:20:40.095561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.814 ms 00:32:01.914 [2024-11-17 14:20:40.095577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.914 [2024-11-17 14:20:40.095703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.915 [2024-11-17 14:20:40.095712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:01.915 [2024-11-17 14:20:40.095722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.103 ms 00:32:01.915 [2024-11-17 14:20:40.095730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.103771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.103828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.915 [2024-11-17 14:20:40.103845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.103854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.104059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.104069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.915 [2024-11-17 14:20:40.104077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.104086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.104149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.104160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.915 [2024-11-17 14:20:40.104168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.104180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.104197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.104206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.915 [2024-11-17 14:20:40.104216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.104225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.121672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.121734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.915 [2024-11-17 14:20:40.121757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.121765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.134760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.134820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.915 [2024-11-17 14:20:40.134831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.134839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.134900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.134910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.915 [2024-11-17 14:20:40.134919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.134927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.134972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.134982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.915 [2024-11-17 
14:20:40.134991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.135007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.135062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.135072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.915 [2024-11-17 14:20:40.135081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.135089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.135116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.135133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:01.915 [2024-11-17 14:20:40.135142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.135149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.135192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.135202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.915 [2024-11-17 14:20:40.135211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.135218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.135308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.915 [2024-11-17 14:20:40.135320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.915 [2024-11-17 14:20:40.135329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.915 [2024-11-17 14:20:40.135337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.915 [2024-11-17 14:20:40.135485] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 45.746 ms, result 0 00:32:02.176 00:32:02.176 00:32:02.176 14:20:40 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:04.723 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:04.723 Process with pid 92595 is not found 00:32:04.723 Remove shared memory files 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 92595 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 92595 ']' 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 92595 00:32:04.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92595) - No such process 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 
92595 is not found' 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_band_md /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_l2p_l1 /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_l2p_l2 /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_l2p_l2_ctx /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_nvc_md /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_p2l_pool /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_sb /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_sb_shm /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_trim_bitmap /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_trim_log /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_trim_md /dev/hugepages/ftl_9eb8e1fa-342d-4de8-93e6-c05f3844e79c_vmap 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:32:04.723 ************************************ 00:32:04.723 END TEST ftl_restore_fast 00:32:04.723 ************************************ 00:32:04.723 00:32:04.723 real 4m51.009s 00:32:04.723 user 4m38.750s 00:32:04.723 sys 0m11.731s 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:04.723 14:20:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 Process with pid 84098 is not found 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@14 -- # killprocess 84098 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@950 -- # '[' -z 84098 ']' 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@954 -- # kill -0 84098 00:32:04.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84098) - No such process 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84098 is not found' 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=95565 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@20 -- # waitforlisten 95565 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@831 -- # '[' -z 95565 ']' 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:04.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:04.723 14:20:42 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:04.723 14:20:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:04.723 [2024-11-17 14:20:42.822759] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
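With the restore suite done, ftl.sh relaunches a target for cleanup: it starts spdk_tgt, records spdk_tgt_pid, and waitforlisten blocks until the process serves RPCs on /var/tmp/spdk.sock; the DPDK EAL parameter line that follows shows the resulting invocation. A minimal sketch of that launch-and-wait pattern, using the repo paths shown in the log; polling rpc.py spdk_get_version as the readiness probe is this sketch's assumption, not a quote of waitforlisten's exact implementation:

    # Launch the target, then poll its RPC socket until it answers.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &
    tgt_pid=$!
    for _ in $(seq 1 100); do
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break                      # target is up and serving RPCs
      fi
      sleep 0.1
    done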
00:32:04.723 [2024-11-17 14:20:42.822907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95565 ] 00:32:04.723 [2024-11-17 14:20:42.973138] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.984 [2024-11-17 14:20:43.024539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.557 14:20:43 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:05.557 14:20:43 ftl -- common/autotest_common.sh@864 -- # return 0 00:32:05.557 14:20:43 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:05.819 nvme0n1 00:32:05.819 14:20:43 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:05.819 14:20:43 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:05.819 14:20:43 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:06.080 14:20:44 ftl -- ftl/common.sh@28 -- # stores=8425273b-6fa8-41d2-aec8-fd0b3f254935 00:32:06.080 14:20:44 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:06.080 14:20:44 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8425273b-6fa8-41d2-aec8-fd0b3f254935 00:32:06.342 14:20:44 ftl -- ftl/ftl.sh@23 -- # killprocess 95565 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@950 -- # '[' -z 95565 ']' 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@954 -- # kill -0 95565 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@955 -- # uname 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95565 00:32:06.342 killing process with pid 95565 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95565' 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@969 -- # kill 95565 00:32:06.342 14:20:44 ftl -- common/autotest_common.sh@974 -- # wait 95565 00:32:06.603 14:20:44 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:06.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:06.864 Waiting for block devices as requested 00:32:06.864 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:06.864 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:07.125 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:07.125 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:12.414 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:12.414 Remove shared memory files 00:32:12.414 14:20:50 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:12.414 14:20:50 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:12.414 14:20:50 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:12.414 14:20:50 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:12.414 14:20:50 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:12.414 14:20:50 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:12.414 14:20:50 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:12.414 
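The cleanup just traced enumerates every lvstore over RPC and deletes it before killing the target; killprocess first probes the PID with kill -0 (hence the 'No such process' notes when a pid is already gone) and only then signals it. A condensed sketch of those two steps, using the same rpc.py methods and jq filter the trace invokes; the literal pid stands in for the harness's spdk_tgt_pid variable:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt_pid=95565                      # placeholder; the harness tracks this itself
    # clear_lvols: list every lvstore UUID, then delete each one.
    for lvs in $("$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      "$rpc" bdev_lvol_delete_lvstore -u "$lvs"
    done
    # killprocess: probe liveness first, then terminate and reap.
    if kill -0 "$tgt_pid" 2>/dev/null; then
      kill "$tgt_pid"
      wait "$tgt_pid"
    fi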
************************************ 00:32:12.414 END TEST ftl 00:32:12.414 ************************************ 00:32:12.414 00:32:12.414 real 17m12.911s 00:32:12.414 user 18m58.168s 00:32:12.414 sys 1m22.365s 00:32:12.414 14:20:50 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:12.414 14:20:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:12.414 14:20:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:12.414 14:20:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:12.414 14:20:50 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:12.414 14:20:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:12.414 14:20:50 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:12.414 14:20:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:12.414 14:20:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:12.414 14:20:50 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:12.414 14:20:50 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:12.414 14:20:50 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:12.414 14:20:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:12.414 14:20:50 -- common/autotest_common.sh@10 -- # set +x 00:32:12.414 14:20:50 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:12.414 14:20:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:12.414 14:20:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:12.414 14:20:50 -- common/autotest_common.sh@10 -- # set +x 00:32:13.797 INFO: APP EXITING 00:32:13.797 INFO: killing all VMs 00:32:13.797 INFO: killing vhost app 00:32:13.797 INFO: EXIT DONE 00:32:13.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:14.369 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:14.369 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:14.369 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:14.369 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:14.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:15.202 Cleaning 00:32:15.202 Removing: /var/run/dpdk/spdk0/config 00:32:15.202 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:15.202 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:15.202 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:15.202 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:15.202 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:15.202 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:15.202 Removing: /var/run/dpdk/spdk0 00:32:15.202 Removing: /var/run/dpdk/spdk_pid69639 00:32:15.202 Removing: /var/run/dpdk/spdk_pid69792 00:32:15.202 Removing: /var/run/dpdk/spdk_pid69993 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70075 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70104 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70215 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70228 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70410 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70484 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70563 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70658 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70738 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70778 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70814 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70885 00:32:15.202 Removing: /var/run/dpdk/spdk_pid70991 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71405 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71458 
00:32:15.202 Removing: /var/run/dpdk/spdk_pid71499 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71515 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71573 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71589 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71647 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71663 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71705 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71723 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71765 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71783 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71910 00:32:15.202 Removing: /var/run/dpdk/spdk_pid71952 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72030 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72191 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72264 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72295 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72704 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72797 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72895 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72932 00:32:15.202 Removing: /var/run/dpdk/spdk_pid72957 00:32:15.202 Removing: /var/run/dpdk/spdk_pid73030 00:32:15.202 Removing: /var/run/dpdk/spdk_pid73645 00:32:15.202 Removing: /var/run/dpdk/spdk_pid73670 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74118 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74205 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74309 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74351 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74371 00:32:15.202 Removing: /var/run/dpdk/spdk_pid74391 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76207 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76328 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76332 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76349 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76395 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76399 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76411 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76450 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76454 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76466 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76511 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76515 00:32:15.202 Removing: /var/run/dpdk/spdk_pid76527 00:32:15.202 Removing: /var/run/dpdk/spdk_pid77888 00:32:15.202 Removing: /var/run/dpdk/spdk_pid77970 00:32:15.202 Removing: /var/run/dpdk/spdk_pid79368 00:32:15.202 Removing: /var/run/dpdk/spdk_pid80719 00:32:15.202 Removing: /var/run/dpdk/spdk_pid80786 00:32:15.202 Removing: /var/run/dpdk/spdk_pid80840 00:32:15.202 Removing: /var/run/dpdk/spdk_pid80890 00:32:15.202 Removing: /var/run/dpdk/spdk_pid80978 00:32:15.202 Removing: /var/run/dpdk/spdk_pid81041 00:32:15.202 Removing: /var/run/dpdk/spdk_pid81178 00:32:15.202 Removing: /var/run/dpdk/spdk_pid81521 00:32:15.464 Removing: /var/run/dpdk/spdk_pid81547 00:32:15.464 Removing: /var/run/dpdk/spdk_pid81989 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82168 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82251 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82351 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82393 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82417 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82708 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82745 00:32:15.464 Removing: /var/run/dpdk/spdk_pid82791 00:32:15.464 Removing: /var/run/dpdk/spdk_pid83154 00:32:15.464 Removing: /var/run/dpdk/spdk_pid83298 00:32:15.464 Removing: /var/run/dpdk/spdk_pid84098 00:32:15.464 Removing: /var/run/dpdk/spdk_pid84209 00:32:15.464 Removing: /var/run/dpdk/spdk_pid84368 00:32:15.464 Removing: 
/var/run/dpdk/spdk_pid84442 00:32:15.464 Removing: /var/run/dpdk/spdk_pid84707 00:32:15.464 Removing: /var/run/dpdk/spdk_pid84932 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85251 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85412 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85555 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85597 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85751 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85770 00:32:15.464 Removing: /var/run/dpdk/spdk_pid85806 00:32:15.464 Removing: /var/run/dpdk/spdk_pid86086 00:32:15.464 Removing: /var/run/dpdk/spdk_pid86300 00:32:15.464 Removing: /var/run/dpdk/spdk_pid86822 00:32:15.464 Removing: /var/run/dpdk/spdk_pid87489 00:32:15.464 Removing: /var/run/dpdk/spdk_pid88011 00:32:15.464 Removing: /var/run/dpdk/spdk_pid88781 00:32:15.464 Removing: /var/run/dpdk/spdk_pid88929 00:32:15.464 Removing: /var/run/dpdk/spdk_pid89009 00:32:15.464 Removing: /var/run/dpdk/spdk_pid89541 00:32:15.464 Removing: /var/run/dpdk/spdk_pid89592 00:32:15.464 Removing: /var/run/dpdk/spdk_pid90434 00:32:15.464 Removing: /var/run/dpdk/spdk_pid90854 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91629 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91756 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91788 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91846 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91901 00:32:15.464 Removing: /var/run/dpdk/spdk_pid91956 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92140 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92221 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92281 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92338 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92373 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92441 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92595 00:32:15.464 Removing: /var/run/dpdk/spdk_pid92811 00:32:15.464 Removing: /var/run/dpdk/spdk_pid93464 00:32:15.464 Removing: /var/run/dpdk/spdk_pid94245 00:32:15.464 Removing: /var/run/dpdk/spdk_pid94802 00:32:15.464 Removing: /var/run/dpdk/spdk_pid95565 00:32:15.464 Clean 00:32:15.464 14:20:53 -- common/autotest_common.sh@1451 -- # return 0 00:32:15.464 14:20:53 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:15.464 14:20:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.464 14:20:53 -- common/autotest_common.sh@10 -- # set +x 00:32:15.464 14:20:53 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:15.464 14:20:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:15.464 14:20:53 -- common/autotest_common.sh@10 -- # set +x 00:32:15.725 14:20:53 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:15.725 14:20:53 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:15.725 14:20:53 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:15.725 14:20:53 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:15.725 14:20:53 -- spdk/autotest.sh@394 -- # hostname 00:32:15.725 14:20:53 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:15.725 geninfo: WARNING: invalid characters removed from testname! 
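The coverage stage that follows first verifies the installed lcov version (cmp_versions walks the dotted version fields numerically, field by field), then captures test-time counters, merges them with the baseline capture, and strips third-party and helper paths from the combined report. A condensed sketch of that lcov pipeline; the paths and filter patterns are the ones in the log, while the genhtml rc options and --ignore-errors switches of the full invocations are omitted here for brevity:

    out=/home/vagrant/spdk_repo/spdk/../output
    common="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"   # left unquoted below so the flags split
    # Capture, merge with the baseline, then remove unwanted sources.
    lcov $common -c --no-external -d /home/vagrant/spdk_repo/spdk \
         -t fedora39-cloud-1721788873-2326 -o "$out/cov_test.info"
    lcov $common -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $common -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done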
00:32:42.309 14:21:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:44.224 14:21:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:46.776 14:21:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:49.324 14:21:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:51.234 14:21:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:53.227 14:21:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:55.143 14:21:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:55.144 14:21:33 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:32:55.144 14:21:33 -- common/autotest_common.sh@1681 -- $ lcov --version 00:32:55.144 14:21:33 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:32:55.144 14:21:33 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:32:55.144 14:21:33 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:32:55.144 14:21:33 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:32:55.144 14:21:33 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:32:55.144 14:21:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:32:55.144 14:21:33 -- scripts/common.sh@336 -- $ read -ra ver1 00:32:55.144 14:21:33 -- scripts/common.sh@337 -- $ IFS=.-: 00:32:55.144 14:21:33 -- scripts/common.sh@337 -- $ read -ra ver2 00:32:55.144 14:21:33 -- scripts/common.sh@338 -- $ local 'op=<' 00:32:55.144 14:21:33 -- scripts/common.sh@340 -- $ ver1_l=2 00:32:55.144 14:21:33 -- scripts/common.sh@341 -- $ ver2_l=1 00:32:55.144 14:21:33 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:32:55.144 14:21:33 -- scripts/common.sh@344 -- $ case "$op" in 00:32:55.144 14:21:33 -- scripts/common.sh@345 -- $ : 1 00:32:55.144 14:21:33 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:32:55.144 14:21:33 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:55.144 14:21:33 -- scripts/common.sh@365 -- $ decimal 1 00:32:55.144 14:21:33 -- scripts/common.sh@353 -- $ local d=1 00:32:55.144 14:21:33 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:32:55.144 14:21:33 -- scripts/common.sh@355 -- $ echo 1 00:32:55.144 14:21:33 -- scripts/common.sh@365 -- $ ver1[v]=1 00:32:55.144 14:21:33 -- scripts/common.sh@366 -- $ decimal 2 00:32:55.144 14:21:33 -- scripts/common.sh@353 -- $ local d=2 00:32:55.144 14:21:33 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:32:55.144 14:21:33 -- scripts/common.sh@355 -- $ echo 2 00:32:55.144 14:21:33 -- scripts/common.sh@366 -- $ ver2[v]=2 00:32:55.144 14:21:33 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:32:55.144 14:21:33 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:32:55.144 14:21:33 -- scripts/common.sh@368 -- $ return 0 00:32:55.144 14:21:33 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:55.144 14:21:33 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:32:55.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.144 --rc genhtml_branch_coverage=1 00:32:55.144 --rc genhtml_function_coverage=1 00:32:55.144 --rc genhtml_legend=1 00:32:55.144 --rc geninfo_all_blocks=1 00:32:55.144 --rc geninfo_unexecuted_blocks=1 00:32:55.144 00:32:55.144 ' 00:32:55.144 14:21:33 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:32:55.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.144 --rc genhtml_branch_coverage=1 00:32:55.144 --rc genhtml_function_coverage=1 00:32:55.144 --rc genhtml_legend=1 00:32:55.144 --rc geninfo_all_blocks=1 00:32:55.144 --rc geninfo_unexecuted_blocks=1 00:32:55.144 00:32:55.144 ' 00:32:55.144 14:21:33 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:32:55.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.144 --rc genhtml_branch_coverage=1 00:32:55.144 --rc genhtml_function_coverage=1 00:32:55.144 --rc genhtml_legend=1 00:32:55.144 --rc geninfo_all_blocks=1 00:32:55.144 --rc geninfo_unexecuted_blocks=1 00:32:55.144 00:32:55.144 ' 00:32:55.144 14:21:33 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:32:55.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:55.144 --rc genhtml_branch_coverage=1 00:32:55.144 --rc genhtml_function_coverage=1 00:32:55.144 --rc genhtml_legend=1 00:32:55.144 --rc geninfo_all_blocks=1 00:32:55.144 --rc geninfo_unexecuted_blocks=1 00:32:55.144 00:32:55.144 ' 00:32:55.144 14:21:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:55.144 14:21:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:32:55.144 14:21:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:55.144 14:21:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:55.144 14:21:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:55.144 14:21:33 -- paths/export.sh@2 -- $ 
00:32:55.144 14:21:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:55.144 14:21:33 -- scripts/common.sh@15 -- $ shopt -s extglob
00:32:55.144 14:21:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:55.144 14:21:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:55.144 14:21:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:55.144 14:21:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.144 14:21:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.144 14:21:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.144 14:21:33 -- paths/export.sh@5 -- $ export PATH
00:32:55.144 14:21:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:55.144 14:21:33 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:55.144 14:21:33 -- common/autobuild_common.sh@479 -- $ date +%s
00:32:55.144 14:21:33 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731853293.XXXXXX
00:32:55.144 14:21:33 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731853293.EPQ47C
00:32:55.144 14:21:33 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:32:55.144 14:21:33 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:32:55.144 14:21:33 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:32:55.144 14:21:33 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:32:55.144 14:21:33 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:55.144 14:21:33 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:55.144 14:21:33 -- common/autobuild_common.sh@495 -- $ get_config_params
00:32:55.144 14:21:33 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:32:55.144 14:21:33 -- common/autotest_common.sh@10 -- $ set +x
00:32:55.144 14:21:33 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
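The autobuild_common.sh trace above stakes out a timestamped scratch workspace and assembles the scan-build command line from per-repository exclude paths before packaging starts. A condensed sketch of that preparation, with the values copied from the log (this is not the verbatim script, and the real code adds error handling around mktemp):

out=/home/vagrant/spdk_repo/spdk/../output

# Timestamped scratch area, e.g. /tmp/spdk_1731853293.EPQ47C in this run.
SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")

# Keep static analysis away from vendored DPDK, xnvme and /tmp.
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
scanbuild="scan-build -o $out/scan-build-tmp$scanbuild_exclude --status-bugs"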
00:32:55.144 14:21:33 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:32:55.144 14:21:33 -- pm/common@17 -- $ local monitor
00:32:55.144 14:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.144 14:21:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:55.144 14:21:33 -- pm/common@25 -- $ sleep 1
00:32:55.144 14:21:33 -- pm/common@21 -- $ date +%s
00:32:55.144 14:21:33 -- pm/common@21 -- $ date +%s
00:32:55.144 14:21:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731853293
00:32:55.144 14:21:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731853293
00:32:55.144 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731853293_collect-cpu-load.pm.log
00:32:55.144 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731853293_collect-vmstat.pm.log
00:32:56.087 14:21:34 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:32:56.087 14:21:34 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:32:56.087 14:21:34 -- spdk/autopackage.sh@14 -- $ timing_finish
00:32:56.087 14:21:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:56.087 14:21:34 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:56.087 14:21:34 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:56.087 14:21:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:56.087 14:21:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:56.087 14:21:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:56.087 14:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.087 14:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:56.087 14:21:34 -- pm/common@44 -- $ pid=97254
00:32:56.087 14:21:34 -- pm/common@50 -- $ kill -TERM 97254
00:32:56.087 14:21:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:56.087 14:21:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:56.087 14:21:34 -- pm/common@44 -- $ pid=97255
00:32:56.087 14:21:34 -- pm/common@50 -- $ kill -TERM 97255
00:32:56.087 + [[ -n 5762 ]]
00:32:56.087 + sudo kill 5762
00:32:56.099 [Pipeline] }
00:32:56.115 [Pipeline] // timeout
00:32:56.121 [Pipeline] }
00:32:56.137 [Pipeline] // stage
00:32:56.144 [Pipeline] }
00:32:56.159 [Pipeline] // catchError
00:32:56.170 [Pipeline] stage
00:32:56.172 [Pipeline] { (Stop VM)
00:32:56.186 [Pipeline] sh
00:32:56.474 + vagrant halt
00:32:59.018 ==> default: Halting domain...
00:33:05.621 [Pipeline] sh
00:33:05.905 + vagrant destroy -f
00:33:08.446 ==> default: Removing domain...
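The pm/common trace above (start_monitor_resources through kill -TERM 97254/97255) is a pidfile-per-collector pattern: each monitor records its PID under $out/power when it starts, and shutdown walks those pidfiles and signals each recorded PID. A compact sketch of the stop side, not the verbatim pm/common source (removing the pidfile after the kill is an assumption here):

power_dir=/home/vagrant/spdk_repo/spdk/../output/power

signal_monitors() {                     # usage: signal_monitors TERM
    local signal=$1 pidfile pid
    for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] || continue   # this monitor never started
        pid=$(<"$pidfile")
        kill -s "$signal" "$pid" && rm -f "$pidfile"   # rm is an assumption
    done
}

trap 'signal_monitors TERM' EXIT        # analogous to the trap installed at autobuild_common.sh@498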
00:33:09.033 [Pipeline] sh
00:33:09.319 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:33:09.330 [Pipeline] }
00:33:09.345 [Pipeline] // stage
00:33:09.349 [Pipeline] }
00:33:09.362 [Pipeline] // dir
00:33:09.367 [Pipeline] }
00:33:09.381 [Pipeline] // wrap
00:33:09.386 [Pipeline] }
00:33:09.398 [Pipeline] // catchError
00:33:09.407 [Pipeline] stage
00:33:09.410 [Pipeline] { (Epilogue)
00:33:09.423 [Pipeline] sh
00:33:09.710 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:15.011 [Pipeline] catchError
00:33:15.013 [Pipeline] {
00:33:15.026 [Pipeline] sh
00:33:15.313 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:15.313 Artifacts sizes are good
00:33:15.324 [Pipeline] }
00:33:15.339 [Pipeline] // catchError
00:33:15.350 [Pipeline] archiveArtifacts
00:33:15.357 Archiving artifacts
00:33:15.484 [Pipeline] cleanWs
00:33:15.520 [WS-CLEANUP] Deleting project workspace...
00:33:15.520 [WS-CLEANUP] Deferred wipeout is used...
00:33:15.537 [WS-CLEANUP] done
00:33:15.539 [Pipeline] }
00:33:15.556 [Pipeline] // stage
00:33:15.562 [Pipeline] }
00:33:15.578 [Pipeline] // node
00:33:15.584 [Pipeline] End of Pipeline
00:33:15.641 Finished: SUCCESS