00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 142 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3643 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.055 The recommended git tool is: git 00:00:00.055 using credential 00000000-0000-0000-0000-000000000002 00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.068 Fetching changes from the remote Git repository 00:00:00.072 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.092 Using shallow fetch with depth 1 00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.092 > git --version # timeout=10 00:00:00.116 > git --version # 'git version 2.39.2' 00:00:00.116 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.139 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.139 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.277 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.289 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.301 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.301 > git config core.sparsecheckout # timeout=10 00:00:06.314 > git read-tree -mu HEAD # timeout=10 00:00:06.330 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.347 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.347 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.474 [Pipeline] Start of Pipeline 00:00:06.489 [Pipeline] library 00:00:06.491 Loading library shm_lib@master 00:00:06.491 Library shm_lib@master is cached. Copying from home. 00:00:06.506 [Pipeline] node 00:00:06.522 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.524 [Pipeline] { 00:00:06.536 [Pipeline] catchError 00:00:06.538 [Pipeline] { 00:00:06.552 [Pipeline] wrap 00:00:06.561 [Pipeline] { 00:00:06.571 [Pipeline] stage 00:00:06.573 [Pipeline] { (Prologue) 00:00:06.593 [Pipeline] echo 00:00:06.595 Node: VM-host-SM38 00:00:06.602 [Pipeline] cleanWs 00:00:06.613 [WS-CLEANUP] Deleting project workspace... 00:00:06.613 [WS-CLEANUP] Deferred wipeout is used... 
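
The checkout above pins the workspace to one exact commit while transferring almost no history: a depth-1 fetch of a single ref, a rev-parse of FETCH_HEAD to resolve the commit SHA, then a forced checkout of that SHA. Below is a minimal sketch of the same pattern outside Jenkins; the repository URL and ref are placeholders, not values taken from this build.

    #!/usr/bin/env bash
    # Shallow, pinned checkout: mirrors the fetch/rev-parse/checkout
    # sequence the git plugin runs above.
    set -euo pipefail

    REPO_URL="https://example.org/some/repo.git"   # placeholder URL
    REF="refs/heads/master"

    git init work && cd work
    git fetch --tags --force --depth=1 "$REPO_URL" "$REF"
    rev="$(git rev-parse "FETCH_HEAD^{commit}")"   # exact SHA of the tip we fetched
    git checkout -f "$rev"                         # detached HEAD at the pinned commit
    echo "checked out $rev"
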
00:00:06.621 [WS-CLEANUP] done 00:00:06.830 [Pipeline] setCustomBuildProperty 00:00:06.911 [Pipeline] httpRequest 00:00:07.293 [Pipeline] echo 00:00:07.295 Sorcerer 10.211.164.20 is alive 00:00:07.304 [Pipeline] retry 00:00:07.306 [Pipeline] { 00:00:07.320 [Pipeline] httpRequest 00:00:07.326 HttpMethod: GET 00:00:07.326 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.327 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.328 Response Code: HTTP/1.1 200 OK 00:00:07.329 Success: Status code 200 is in the accepted range: 200,404 00:00:07.329 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.330 [Pipeline] } 00:00:08.345 [Pipeline] // retry 00:00:08.352 [Pipeline] sh 00:00:08.639 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.659 [Pipeline] httpRequest 00:00:09.375 [Pipeline] echo 00:00:09.376 Sorcerer 10.211.164.20 is alive 00:00:09.382 [Pipeline] retry 00:00:09.383 [Pipeline] { 00:00:09.392 [Pipeline] httpRequest 00:00:09.397 HttpMethod: GET 00:00:09.398 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.399 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.415 Response Code: HTTP/1.1 200 OK 00:00:09.416 Success: Status code 200 is in the accepted range: 200,404 00:00:09.417 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:49.962 [Pipeline] } 00:01:49.986 [Pipeline] // retry 00:01:49.991 [Pipeline] sh 00:01:50.272 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:52.833 [Pipeline] sh 00:01:53.117 + git -C spdk log --oneline -n5 00:01:53.117 b18e1bd62 version: v24.09.1-pre 00:01:53.117 19524ad45 version: v24.09 00:01:53.117 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:53.117 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:53.117 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:53.138 [Pipeline] withCredentials 00:01:53.150 > git --version # timeout=10 00:01:53.164 > git --version # 'git version 2.39.2' 00:01:53.182 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:53.184 [Pipeline] { 00:01:53.192 [Pipeline] retry 00:01:53.194 [Pipeline] { 00:01:53.208 [Pipeline] sh 00:01:53.489 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:53.502 [Pipeline] } 00:01:53.521 [Pipeline] // retry 00:01:53.526 [Pipeline] } 00:01:53.542 [Pipeline] // withCredentials 00:01:53.551 [Pipeline] httpRequest 00:01:53.911 [Pipeline] echo 00:01:53.913 Sorcerer 10.211.164.20 is alive 00:01:53.921 [Pipeline] retry 00:01:53.923 [Pipeline] { 00:01:53.935 [Pipeline] httpRequest 00:01:53.940 HttpMethod: GET 00:01:53.941 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:53.941 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:53.943 Response Code: HTTP/1.1 200 OK 00:01:53.943 Success: Status code 200 is in the accepted range: 200,404 00:01:53.944 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:02.269 [Pipeline] } 00:02:02.284 [Pipeline] // retry 00:02:02.292 [Pipeline] sh 00:02:02.573 + tar --no-same-owner -xf 
dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:04.501 [Pipeline] sh 00:02:04.789 + git -C dpdk log --oneline -n5 00:02:04.789 eeb0605f11 version: 23.11.0 00:02:04.789 238778122a doc: update release notes for 23.11 00:02:04.789 46aa6b3cfc doc: fix description of RSS features 00:02:04.789 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:04.789 7e421ae345 devtools: support skipping forbid rule check 00:02:04.810 [Pipeline] writeFile 00:02:04.825 [Pipeline] sh 00:02:05.112 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:05.126 [Pipeline] sh 00:02:05.412 + cat autorun-spdk.conf 00:02:05.412 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.412 SPDK_TEST_NVME=1 00:02:05.412 SPDK_TEST_FTL=1 00:02:05.412 SPDK_TEST_ISAL=1 00:02:05.412 SPDK_RUN_ASAN=1 00:02:05.412 SPDK_RUN_UBSAN=1 00:02:05.412 SPDK_TEST_XNVME=1 00:02:05.412 SPDK_TEST_NVME_FDP=1 00:02:05.412 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.412 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.412 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.422 RUN_NIGHTLY=1 00:02:05.424 [Pipeline] } 00:02:05.440 [Pipeline] // stage 00:02:05.456 [Pipeline] stage 00:02:05.458 [Pipeline] { (Run VM) 00:02:05.471 [Pipeline] sh 00:02:05.757 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:05.757 + echo 'Start stage prepare_nvme.sh' 00:02:05.757 Start stage prepare_nvme.sh 00:02:05.757 + [[ -n 1 ]] 00:02:05.757 + disk_prefix=ex1 00:02:05.757 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:05.757 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:05.757 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:05.757 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.757 ++ SPDK_TEST_NVME=1 00:02:05.757 ++ SPDK_TEST_FTL=1 00:02:05.757 ++ SPDK_TEST_ISAL=1 00:02:05.757 ++ SPDK_RUN_ASAN=1 00:02:05.757 ++ SPDK_RUN_UBSAN=1 00:02:05.757 ++ SPDK_TEST_XNVME=1 00:02:05.757 ++ SPDK_TEST_NVME_FDP=1 00:02:05.757 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.757 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.757 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.757 ++ RUN_NIGHTLY=1 00:02:05.757 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:05.757 + nvme_files=() 00:02:05.757 + declare -A nvme_files 00:02:05.757 + backend_dir=/var/lib/libvirt/images/backends 00:02:05.757 + nvme_files['nvme.img']=5G 00:02:05.757 + nvme_files['nvme-cmb.img']=5G 00:02:05.757 + nvme_files['nvme-multi0.img']=4G 00:02:05.757 + nvme_files['nvme-multi1.img']=4G 00:02:05.757 + nvme_files['nvme-multi2.img']=4G 00:02:05.757 + nvme_files['nvme-openstack.img']=8G 00:02:05.757 + nvme_files['nvme-zns.img']=5G 00:02:05.757 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:05.757 + (( SPDK_TEST_FTL == 1 )) 00:02:05.757 + nvme_files["nvme-ftl.img"]=6G 00:02:05.757 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:05.757 + nvme_files["nvme-fdp.img"]=1G 00:02:05.757 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:05.757 + for nvme in "${!nvme_files[@]}" 00:02:05.757 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:05.757 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:05.757 + for nvme in "${!nvme_files[@]}" 00:02:05.757 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:06.699 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:06.699 + for nvme in "${!nvme_files[@]}" 00:02:06.699 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:06.958 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:06.958 + for nvme in "${!nvme_files[@]}" 00:02:06.958 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:06.958 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:06.958 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:06.958 + echo 'End stage prepare_nvme.sh' 00:02:06.958 End stage prepare_nvme.sh 00:02:06.970 [Pipeline] sh 00:02:07.256 + DISTRO=fedora39 00:02:07.256 + CPUS=10 00:02:07.256 + RAM=12288 00:02:07.256 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:07.257 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:07.257 00:02:07.257 
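
The prepare_nvme.sh trace above drives everything from one Bash associative array, image name -> size, with extra entries appended only when the matching SPDK_TEST_* flag is set, and then loops over it to create each backing file. A condensed sketch of that idea follows. The direct qemu-img call is an assumption about what create_nvme_img.sh ends up doing (the "Formatting ... preallocation=falloc" lines in the trace are qemu-img's usual output); it is not the script's actual contents.

    #!/usr/bin/env bash
    # Size table -> raw backing images, as in the prepare_nvme.sh stage above.
    set -euo pipefail

    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
      [nvme.img]=5G
      [nvme-multi0.img]=4G
    )
    # Optional images keyed off test flags, as in the trace above.
    (( ${SPDK_TEST_FTL:-0} == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( ${SPDK_TEST_NVME_FDP:-0} == 1 )) && nvme_files[nvme-fdp.img]=1G

    for name in "${!nvme_files[@]}"; do
      # Assumed equivalent of create_nvme_img.sh -n <file> -s <size>:
      qemu-img create -f raw -o preallocation=falloc \
        "$backend_dir/ex1-$name" "${nvme_files[$name]}"
    done
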
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:07.257 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:07.257 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:07.257 HELP=0 00:02:07.257 DRY_RUN=0 00:02:07.257 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:07.257 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:07.257 NVME_AUTO_CREATE=0 00:02:07.257 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:07.257 NVME_CMB=,,,, 00:02:07.257 NVME_PMR=,,,, 00:02:07.257 NVME_ZNS=,,,, 00:02:07.257 NVME_MS=true,,,, 00:02:07.257 NVME_FDP=,,,on, 00:02:07.257 SPDK_VAGRANT_DISTRO=fedora39 00:02:07.257 SPDK_VAGRANT_VMCPU=10 00:02:07.257 SPDK_VAGRANT_VMRAM=12288 00:02:07.257 SPDK_VAGRANT_PROVIDER=libvirt 00:02:07.257 SPDK_VAGRANT_HTTP_PROXY= 00:02:07.257 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:07.257 SPDK_OPENSTACK_NETWORK=0 00:02:07.257 VAGRANT_PACKAGE_BOX=0 00:02:07.257 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:07.257 FORCE_DISTRO=true 00:02:07.257 VAGRANT_BOX_VERSION= 00:02:07.257 EXTRA_VAGRANTFILES= 00:02:07.257 NIC_MODEL=e1000 00:02:07.257 00:02:07.257 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:07.257 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:09.831 Bringing machine 'default' up with 'libvirt' provider... 00:02:10.090 ==> default: Creating image (snapshot of base box volume). 00:02:10.349 ==> default: Creating domain with the following settings... 
00:02:10.349 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731970649_0051960c8fd79cb32f55 00:02:10.349 ==> default: -- Domain type: kvm 00:02:10.349 ==> default: -- Cpus: 10 00:02:10.349 ==> default: -- Feature: acpi 00:02:10.349 ==> default: -- Feature: apic 00:02:10.349 ==> default: -- Feature: pae 00:02:10.349 ==> default: -- Memory: 12288M 00:02:10.349 ==> default: -- Memory Backing: hugepages: 00:02:10.350 ==> default: -- Management MAC: 00:02:10.350 ==> default: -- Loader: 00:02:10.350 ==> default: -- Nvram: 00:02:10.350 ==> default: -- Base box: spdk/fedora39 00:02:10.350 ==> default: -- Storage pool: default 00:02:10.350 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731970649_0051960c8fd79cb32f55.img (20G) 00:02:10.350 ==> default: -- Volume Cache: default 00:02:10.350 ==> default: -- Kernel: 00:02:10.350 ==> default: -- Initrd: 00:02:10.350 ==> default: -- Graphics Type: vnc 00:02:10.350 ==> default: -- Graphics Port: -1 00:02:10.350 ==> default: -- Graphics IP: 127.0.0.1 00:02:10.350 ==> default: -- Graphics Password: Not defined 00:02:10.350 ==> default: -- Video Type: cirrus 00:02:10.350 ==> default: -- Video VRAM: 9216 00:02:10.350 ==> default: -- Sound Type: 00:02:10.350 ==> default: -- Keymap: en-us 00:02:10.350 ==> default: -- TPM Path: 00:02:10.350 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:10.350 ==> default: -- Command line args: 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:10.350 ==> default: -> value=-drive, 00:02:10.350 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:10.350 ==> default: -> value=-device, 00:02:10.350 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:10.350 ==> default: Creating shared folders metadata... 00:02:10.611 ==> default: Starting domain. 00:02:12.531 ==> default: Waiting for domain to get an IP address... 00:02:30.674 ==> default: Waiting for SSH to become available... 00:02:30.674 ==> default: Configuring and enabling network interfaces... 00:02:33.221 default: SSH address: 192.168.121.226:22 00:02:33.221 default: SSH username: vagrant 00:02:33.221 default: SSH auth method: private key 00:02:35.768 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:43.919 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:50.530 ==> default: Mounting SSHFS shared folder... 00:02:51.965 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:51.965 ==> default: Checking Mount.. 00:02:53.352 ==> default: Folder Successfully Mounted! 00:02:53.352 00:02:53.352 SUCCESS! 00:02:53.352 00:02:53.352 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:53.352 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:53.352 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:53.352 00:02:53.364 [Pipeline] } 00:02:53.381 [Pipeline] // stage 00:02:53.391 [Pipeline] dir 00:02:53.392 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:53.394 [Pipeline] { 00:02:53.407 [Pipeline] catchError 00:02:53.409 [Pipeline] { 00:02:53.423 [Pipeline] sh 00:02:53.712 + vagrant ssh-config --host vagrant 00:02:53.712 + sed -ne '/^Host/,$p' 00:02:53.712 + tee ssh_conf 00:02:56.262 Host vagrant 00:02:56.262 HostName 192.168.121.226 00:02:56.262 User vagrant 00:02:56.262 Port 22 00:02:56.262 UserKnownHostsFile /dev/null 00:02:56.262 StrictHostKeyChecking no 00:02:56.262 PasswordAuthentication no 00:02:56.262 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:56.262 IdentitiesOnly yes 00:02:56.262 LogLevel FATAL 00:02:56.262 ForwardAgent yes 00:02:56.262 ForwardX11 yes 00:02:56.262 00:02:56.278 [Pipeline] withEnv 00:02:56.281 [Pipeline] { 00:02:56.295 [Pipeline] sh 00:02:56.580 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:56.580 source /etc/os-release 00:02:56.580 [[ -e /image.version ]] && img=$(< /image.version) 00:02:56.580 # Minimal, systemd-like check. 
00:02:56.580 if [[ -e /.dockerenv ]]; then 00:02:56.580 # Clear garbage from the node'\''s name: 00:02:56.580 # agt-er_autotest_547-896 -> autotest_547-896 00:02:56.580 # $HOSTNAME is the actual container id 00:02:56.580 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:56.580 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:56.580 # We can assume this is a mount from a host where container is running, 00:02:56.580 # so fetch its hostname to easily identify the target swarm worker. 00:02:56.580 container="$(< /etc/hostname) ($agent)" 00:02:56.580 else 00:02:56.580 # Fallback 00:02:56.580 container=$agent 00:02:56.580 fi 00:02:56.580 fi 00:02:56.580 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:56.580 ' 00:02:56.855 [Pipeline] } 00:02:56.871 [Pipeline] // withEnv 00:02:56.880 [Pipeline] setCustomBuildProperty 00:02:56.894 [Pipeline] stage 00:02:56.896 [Pipeline] { (Tests) 00:02:56.913 [Pipeline] sh 00:02:57.197 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:57.473 [Pipeline] sh 00:02:57.758 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:58.036 [Pipeline] timeout 00:02:58.036 Timeout set to expire in 50 min 00:02:58.038 [Pipeline] { 00:02:58.053 [Pipeline] sh 00:02:58.336 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:58.907 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:58.921 [Pipeline] sh 00:02:59.268 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:59.281 [Pipeline] sh 00:02:59.560 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:59.833 [Pipeline] sh 00:03:00.112 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:03:00.112 ++ readlink -f spdk_repo 00:03:00.112 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:00.112 + [[ -n /home/vagrant/spdk_repo ]] 00:03:00.112 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:00.112 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:00.112 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:00.112 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:00.112 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:00.112 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:00.112 + cd /home/vagrant/spdk_repo 00:03:00.112 + source /etc/os-release 00:03:00.112 ++ NAME='Fedora Linux' 00:03:00.112 ++ VERSION='39 (Cloud Edition)' 00:03:00.112 ++ ID=fedora 00:03:00.112 ++ VERSION_ID=39 00:03:00.112 ++ VERSION_CODENAME= 00:03:00.112 ++ PLATFORM_ID=platform:f39 00:03:00.112 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:00.112 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:00.112 ++ LOGO=fedora-logo-icon 00:03:00.112 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:00.112 ++ HOME_URL=https://fedoraproject.org/ 00:03:00.112 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:00.112 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:00.112 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:00.112 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:00.112 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:00.112 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:00.112 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:00.112 ++ SUPPORT_END=2024-11-12 00:03:00.112 ++ VARIANT='Cloud Edition' 00:03:00.112 ++ VARIANT_ID=cloud 00:03:00.112 + uname -a 00:03:00.370 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:00.370 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:00.628 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:00.885 Hugepages 00:03:00.885 node hugesize free / total 00:03:00.885 node0 1048576kB 0 / 0 00:03:00.885 node0 2048kB 0 / 0 00:03:00.885 00:03:00.885 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:00.885 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:00.885 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:00.885 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:00.885 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:00.885 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:00.885 + rm -f /tmp/spdk-ld-path 00:03:00.885 + source autorun-spdk.conf 00:03:00.885 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:00.885 ++ SPDK_TEST_NVME=1 00:03:00.885 ++ SPDK_TEST_FTL=1 00:03:00.885 ++ SPDK_TEST_ISAL=1 00:03:00.885 ++ SPDK_RUN_ASAN=1 00:03:00.885 ++ SPDK_RUN_UBSAN=1 00:03:00.885 ++ SPDK_TEST_XNVME=1 00:03:00.885 ++ SPDK_TEST_NVME_FDP=1 00:03:00.885 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:00.885 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:00.885 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:00.885 ++ RUN_NIGHTLY=1 00:03:00.885 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:00.885 + [[ -n '' ]] 00:03:00.885 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:01.143 + for M in /var/spdk/build-*-manifest.txt 00:03:01.143 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:01.143 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:01.143 + for M in /var/spdk/build-*-manifest.txt 00:03:01.143 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:01.143 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:01.143 + for M in /var/spdk/build-*-manifest.txt 00:03:01.143 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:01.143 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:01.143 ++ uname 00:03:01.143 + [[ Linux == 
\L\i\n\u\x ]] 00:03:01.143 + sudo dmesg -T 00:03:01.143 + sudo dmesg --clear 00:03:01.143 + dmesg_pid=5768 00:03:01.143 + [[ Fedora Linux == FreeBSD ]] 00:03:01.143 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:01.143 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:01.143 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:01.143 + [[ -x /usr/src/fio-static/fio ]] 00:03:01.143 + sudo dmesg -Tw 00:03:01.143 + export FIO_BIN=/usr/src/fio-static/fio 00:03:01.143 + FIO_BIN=/usr/src/fio-static/fio 00:03:01.143 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:01.143 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:01.143 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:01.143 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:01.143 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:01.143 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:01.143 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:01.143 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:01.143 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:01.143 Test configuration: 00:03:01.143 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:01.143 SPDK_TEST_NVME=1 00:03:01.143 SPDK_TEST_FTL=1 00:03:01.143 SPDK_TEST_ISAL=1 00:03:01.143 SPDK_RUN_ASAN=1 00:03:01.143 SPDK_RUN_UBSAN=1 00:03:01.143 SPDK_TEST_XNVME=1 00:03:01.143 SPDK_TEST_NVME_FDP=1 00:03:01.143 SPDK_TEST_NATIVE_DPDK=v23.11 00:03:01.143 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:01.143 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:01.143 RUN_NIGHTLY=1 22:58:20 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:01.143 22:58:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:01.143 22:58:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:01.143 22:58:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:01.143 22:58:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:01.144 22:58:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:01.144 22:58:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.144 22:58:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.144 22:58:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.144 22:58:20 -- paths/export.sh@5 -- $ export PATH 00:03:01.144 22:58:20 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:01.144 22:58:20 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:01.144 22:58:20 -- common/autobuild_common.sh@479 -- $ date +%s 00:03:01.144 22:58:20 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731970700.XXXXXX 00:03:01.144 22:58:20 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731970700.RxZ42Z 00:03:01.144 22:58:20 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:03:01.144 22:58:20 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:03:01.144 22:58:20 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:01.144 22:58:20 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:01.144 22:58:20 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:01.144 22:58:20 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:01.144 22:58:20 -- common/autobuild_common.sh@495 -- $ get_config_params 00:03:01.144 22:58:20 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:01.144 22:58:20 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.144 22:58:20 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:03:01.144 22:58:20 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:03:01.144 22:58:20 -- pm/common@17 -- $ local monitor 00:03:01.144 22:58:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.144 22:58:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:01.144 22:58:20 -- pm/common@25 -- $ sleep 1 00:03:01.144 22:58:20 -- pm/common@21 -- $ date +%s 00:03:01.144 22:58:20 -- pm/common@21 -- $ date +%s 00:03:01.144 22:58:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731970700 00:03:01.144 22:58:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731970700 00:03:01.144 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731970700_collect-cpu-load.pm.log 00:03:01.144 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731970700_collect-vmstat.pm.log 00:03:02.518 22:58:21 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:02.518 22:58:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:02.518 22:58:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:02.518 22:58:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:02.518 22:58:21 -- spdk/autobuild.sh@16 -- $ date -u 00:03:02.518 Mon 
Nov 18 10:58:21 PM UTC 2024 00:03:02.518 22:58:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:02.518 v24.09-1-gb18e1bd62 00:03:02.518 22:58:21 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:02.518 22:58:21 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:02.518 22:58:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:02.518 22:58:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:02.518 22:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.518 ************************************ 00:03:02.518 START TEST asan 00:03:02.518 ************************************ 00:03:02.518 using asan 00:03:02.518 22:58:21 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:02.518 00:03:02.518 real 0m0.000s 00:03:02.518 user 0m0.000s 00:03:02.518 sys 0m0.000s 00:03:02.518 22:58:21 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:02.518 ************************************ 00:03:02.518 22:58:21 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:02.518 END TEST asan 00:03:02.518 ************************************ 00:03:02.518 22:58:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:02.518 22:58:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:02.518 22:58:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:02.518 22:58:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:02.518 22:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.518 ************************************ 00:03:02.518 START TEST ubsan 00:03:02.518 ************************************ 00:03:02.518 using ubsan 00:03:02.518 22:58:21 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:02.518 00:03:02.518 real 0m0.000s 00:03:02.518 user 0m0.000s 00:03:02.518 sys 0m0.000s 00:03:02.518 22:58:21 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:02.518 22:58:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:02.518 ************************************ 00:03:02.518 END TEST ubsan 00:03:02.518 ************************************ 00:03:02.518 22:58:21 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:02.518 22:58:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:02.518 22:58:21 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:02.518 22:58:21 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:02.518 22:58:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:02.518 22:58:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.518 ************************************ 00:03:02.519 START TEST build_native_dpdk 00:03:02.519 ************************************ 00:03:02.519 22:58:21 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:02.519 22:58:21 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:02.519 eeb0605f11 version: 23.11.0 00:03:02.519 238778122a doc: update release notes for 23.11 00:03:02.519 46aa6b3cfc doc: fix description of RSS features 00:03:02.519 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:02.519 7e421ae345 devtools: support skipping forbid rule check 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:03:02.519 22:58:21 build_native_dpdk -- 
scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:02.519 patching file config/rte_config.h 00:03:02.519 Hunk #1 succeeded at 60 (offset 1 line). 
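
The lt/ge checks traced above reduce to a component-wise numeric comparison: split each version string on ".", "-" and ":", then walk the components left to right until one differs. Here is a simplified, self-contained rendering of that logic; it assumes purely numeric components, unlike the fuller decimal() helper in scripts/common.sh.

    #!/usr/bin/env bash
    # Component-wise "less than" for dotted versions, e.g. 23.11.0 < 24.07.0.
    version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
      done
      return 1   # equal is not "less than"
    }

    version_lt 23.11.0 21.11.0 || echo "23.11 is not older than 21.11"
    version_lt 23.11.0 24.07.0 && echo "older than 24.07: apply the pcapng patch"
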
00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:03:02.519 patching file lib/pcapng/rte_pcapng.c 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:02.519 22:58:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:02.519 22:58:21 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:02.520 22:58:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:02.520 22:58:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:02.520 22:58:21 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:02.520 22:58:21 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.706 The Meson build system 00:03:06.706 Version: 1.5.0 00:03:06.706 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:06.706 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:06.706 Build type: native build 00:03:06.706 Program cat found: YES (/usr/bin/cat) 00:03:06.706 Project name: DPDK 00:03:06.706 Project version: 23.11.0 00:03:06.706 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:06.706 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:06.706 Host machine cpu family: x86_64 00:03:06.706 Host machine cpu: x86_64 00:03:06.706 Message: ## Building in Developer Mode ## 00:03:06.706 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:06.706 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:06.706 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:06.706 Program python3 found: YES (/usr/bin/python3) 00:03:06.706 Program cat found: YES (/usr/bin/cat) 00:03:06.706 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
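
The WARNING above is about a renamed DPDK build option: "machine" was deprecated in favor of "cpu_instruction_set". A sketch of the same configure step with the non-deprecated spelling; every other flag is copied from the meson command above, with ninja assumed as the backend for the subsequent build-and-install step:

    # As above, but with -Dmachine=native replaced by its documented successor.
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
      -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
      '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Dcpu_instruction_set=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp install
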
00:03:06.706 Compiler for C supports arguments -march=native: YES 00:03:06.706 Checking for size of "void *" : 8 00:03:06.706 Checking for size of "void *" : 8 (cached) 00:03:06.706 Library m found: YES 00:03:06.706 Library numa found: YES 00:03:06.706 Has header "numaif.h" : YES 00:03:06.706 Library fdt found: NO 00:03:06.706 Library execinfo found: NO 00:03:06.706 Has header "execinfo.h" : YES 00:03:06.706 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:06.706 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:06.706 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:06.706 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:06.706 Run-time dependency openssl found: YES 3.1.1 00:03:06.706 Run-time dependency libpcap found: YES 1.10.4 00:03:06.706 Has header "pcap.h" with dependency libpcap: YES 00:03:06.706 Compiler for C supports arguments -Wcast-qual: YES 00:03:06.706 Compiler for C supports arguments -Wdeprecated: YES 00:03:06.706 Compiler for C supports arguments -Wformat: YES 00:03:06.706 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:06.706 Compiler for C supports arguments -Wformat-security: NO 00:03:06.706 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:06.706 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:06.706 Compiler for C supports arguments -Wnested-externs: YES 00:03:06.706 Compiler for C supports arguments -Wold-style-definition: YES 00:03:06.706 Compiler for C supports arguments -Wpointer-arith: YES 00:03:06.706 Compiler for C supports arguments -Wsign-compare: YES 00:03:06.706 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:06.706 Compiler for C supports arguments -Wundef: YES 00:03:06.706 Compiler for C supports arguments -Wwrite-strings: YES 00:03:06.706 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:06.706 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:06.706 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:06.706 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:06.706 Program objdump found: YES (/usr/bin/objdump) 00:03:06.706 Compiler for C supports arguments -mavx512f: YES 00:03:06.706 Checking if "AVX512 checking" compiles: YES 00:03:06.706 Fetching value of define "__SSE4_2__" : 1 00:03:06.706 Fetching value of define "__AES__" : 1 00:03:06.706 Fetching value of define "__AVX__" : 1 00:03:06.706 Fetching value of define "__AVX2__" : 1 00:03:06.706 Fetching value of define "__AVX512BW__" : 1 00:03:06.706 Fetching value of define "__AVX512CD__" : 1 00:03:06.706 Fetching value of define "__AVX512DQ__" : 1 00:03:06.706 Fetching value of define "__AVX512F__" : 1 00:03:06.706 Fetching value of define "__AVX512VL__" : 1 00:03:06.706 Fetching value of define "__PCLMUL__" : 1 00:03:06.706 Fetching value of define "__RDRND__" : 1 00:03:06.706 Fetching value of define "__RDSEED__" : 1 00:03:06.706 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:06.706 Fetching value of define "__znver1__" : (undefined) 00:03:06.706 Fetching value of define "__znver2__" : (undefined) 00:03:06.706 Fetching value of define "__znver3__" : (undefined) 00:03:06.706 Fetching value of define "__znver4__" : (undefined) 00:03:06.706 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:06.706 Message: lib/log: Defining dependency "log" 00:03:06.706 Message: lib/kvargs: Defining dependency "kvargs" 00:03:06.706 Message: lib/telemetry: Defining dependency "telemetry" 
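
The "Fetching value of define" probes a few lines up are meson asking the compiler which predefined macros the chosen -march turns on; each "1" or "(undefined)" above is the result of one such lookup. The same answer can be pulled out by hand by dumping the preprocessor's macro table (gcc shown; any compiler supporting -dM behaves the same way):

    # Dump predefined macros for the native ISA and pick out the ones
    # meson reported above (__AVX512F__, __AES__, __PCLMUL__, ...).
    echo | gcc -march=native -dM -E - | grep -E '__(AVX512F|AVX512BW|AES|PCLMUL|RDRND|RDSEED)__'
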
00:03:06.706 Checking for function "getentropy" : NO
00:03:06.706 Message: lib/eal: Defining dependency "eal"
00:03:06.707 Message: lib/ring: Defining dependency "ring"
00:03:06.707 Message: lib/rcu: Defining dependency "rcu"
00:03:06.707 Message: lib/mempool: Defining dependency "mempool"
00:03:06.707 Message: lib/mbuf: Defining dependency "mbuf"
00:03:06.707 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:06.707 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:06.707 Compiler for C supports arguments -mpclmul: YES
00:03:06.707 Compiler for C supports arguments -maes: YES
00:03:06.707 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:06.707 Compiler for C supports arguments -mavx512bw: YES
00:03:06.707 Compiler for C supports arguments -mavx512dq: YES
00:03:06.707 Compiler for C supports arguments -mavx512vl: YES
00:03:06.707 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:06.707 Compiler for C supports arguments -mavx2: YES
00:03:06.707 Compiler for C supports arguments -mavx: YES
00:03:06.707 Message: lib/net: Defining dependency "net"
00:03:06.707 Message: lib/meter: Defining dependency "meter"
00:03:06.707 Message: lib/ethdev: Defining dependency "ethdev"
00:03:06.707 Message: lib/pci: Defining dependency "pci"
00:03:06.707 Message: lib/cmdline: Defining dependency "cmdline"
00:03:06.707 Message: lib/metrics: Defining dependency "metrics"
00:03:06.707 Message: lib/hash: Defining dependency "hash"
00:03:06.707 Message: lib/timer: Defining dependency "timer"
00:03:06.707 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:06.707 Message: lib/acl: Defining dependency "acl"
00:03:06.707 Message: lib/bbdev: Defining dependency "bbdev"
00:03:06.707 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:06.707 Run-time dependency libelf found: YES 0.191
00:03:06.707 Message: lib/bpf: Defining dependency "bpf"
00:03:06.707 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:06.707 Message: lib/compressdev: Defining dependency "compressdev"
00:03:06.707 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:06.707 Message: lib/distributor: Defining dependency "distributor"
00:03:06.707 Message: lib/dmadev: Defining dependency "dmadev"
00:03:06.707 Message: lib/efd: Defining dependency "efd"
00:03:06.707 Message: lib/eventdev: Defining dependency "eventdev"
00:03:06.707 Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:06.707 Message: lib/gpudev: Defining dependency "gpudev"
00:03:06.707 Message: lib/gro: Defining dependency "gro"
00:03:06.707 Message: lib/gso: Defining dependency "gso"
00:03:06.707 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:06.707 Message: lib/jobstats: Defining dependency "jobstats"
00:03:06.707 Message: lib/latencystats: Defining dependency "latencystats"
00:03:06.707 Message: lib/lpm: Defining dependency "lpm"
00:03:06.707 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512IFMA__" : 1
00:03:06.707 Message: lib/member: Defining dependency "member"
00:03:06.707 Message: lib/pcapng: Defining dependency "pcapng"
00:03:06.707 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:06.707 Message: lib/power: Defining dependency "power"
00:03:06.707 Message: lib/rawdev: Defining dependency "rawdev"
00:03:06.707 Message: lib/regexdev: Defining dependency "regexdev"
00:03:06.707 Message: lib/mldev: Defining dependency "mldev"
00:03:06.707 Message: lib/rib: Defining dependency "rib"
00:03:06.707 Message: lib/reorder: Defining dependency "reorder"
00:03:06.707 Message: lib/sched: Defining dependency "sched"
00:03:06.707 Message: lib/security: Defining dependency "security"
00:03:06.707 Message: lib/stack: Defining dependency "stack"
00:03:06.707 Has header "linux/userfaultfd.h" : YES
00:03:06.707 Has header "linux/vduse.h" : YES
00:03:06.707 Message: lib/vhost: Defining dependency "vhost"
00:03:06.707 Message: lib/ipsec: Defining dependency "ipsec"
00:03:06.707 Message: lib/pdcp: Defining dependency "pdcp"
00:03:06.707 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:06.707 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:06.707 Message: lib/fib: Defining dependency "fib"
00:03:06.707 Message: lib/port: Defining dependency "port"
00:03:06.707 Message: lib/pdump: Defining dependency "pdump"
00:03:06.707 Message: lib/table: Defining dependency "table"
00:03:06.707 Message: lib/pipeline: Defining dependency "pipeline"
00:03:06.707 Message: lib/graph: Defining dependency "graph"
00:03:06.707 Message: lib/node: Defining dependency "node"
00:03:06.707 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:06.707 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:06.707 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:06.707 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:08.079 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:08.079 Compiler for C supports arguments -Wno-unused-value: YES
00:03:08.079 Compiler for C supports arguments -Wno-format: YES
00:03:08.079 Compiler for C supports arguments -Wno-format-security: YES
00:03:08.079 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:08.079 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:08.079 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:08.079 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:08.079 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.079 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.079 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.079 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:08.079 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:08.079 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:08.079 Has header "sys/epoll.h" : YES
00:03:08.079 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:08.079 Configuring doxy-api-html.conf using configuration
00:03:08.079 Configuring doxy-api-man.conf using configuration
00:03:08.079 Program mandb found: YES (/usr/bin/mandb)
00:03:08.079 Program sphinx-build found: NO
00:03:08.079 Configuring rte_build_config.h using configuration
00:03:08.079 Message:
00:03:08.079 =================
00:03:08.079 Applications Enabled
00:03:08.079 =================
00:03:08.079
00:03:08.079 apps:
00:03:08.079 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:03:08.079 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:03:08.079 test-pmd, test-regex, test-sad, test-security-perf,
00:03:08.079
00:03:08.079 Message:
00:03:08.079 =================
00:03:08.079 Libraries Enabled
00:03:08.079 =================
00:03:08.079
00:03:08.079 libs:
00:03:08.079 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:08.079 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:03:08.079 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:03:08.079 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:03:08.079 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:03:08.079 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:03:08.079 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:03:08.079
00:03:08.079
00:03:08.079 Message:
00:03:08.079 ===============
00:03:08.079 Drivers Enabled
00:03:08.079 ===============
00:03:08.079
00:03:08.079 common:
00:03:08.079
00:03:08.079 bus:
00:03:08.079 pci, vdev,
00:03:08.079 mempool:
00:03:08.079 ring,
00:03:08.079 dma:
00:03:08.079
00:03:08.079 net:
00:03:08.079 i40e,
00:03:08.079 raw:
00:03:08.079
00:03:08.079 crypto:
00:03:08.079
00:03:08.079 compress:
00:03:08.079
00:03:08.079 regex:
00:03:08.079
00:03:08.079 ml:
00:03:08.079
00:03:08.079 vdpa:
00:03:08.079
00:03:08.079 event:
00:03:08.079
00:03:08.079 baseband:
00:03:08.079
00:03:08.079 gpu:
00:03:08.079
00:03:08.079
00:03:08.079 Message:
00:03:08.079 =================
00:03:08.079 Content Skipped
00:03:08.079 =================
00:03:08.079
00:03:08.079 apps:
00:03:08.079
00:03:08.079 libs:
00:03:08.079
00:03:08.079 drivers:
00:03:08.079 common/cpt: not in enabled drivers build config
00:03:08.079 common/dpaax: not in enabled drivers build config
00:03:08.079 common/iavf: not in enabled drivers build config
00:03:08.079 common/idpf: not in enabled drivers build config
00:03:08.079 common/mvep: not in enabled drivers build config
00:03:08.079 common/octeontx: not in enabled drivers build config
00:03:08.079 bus/auxiliary: not in enabled drivers build config
00:03:08.079 bus/cdx: not in enabled drivers build config
00:03:08.079 bus/dpaa: not in enabled drivers build config
00:03:08.079 bus/fslmc: not in enabled drivers build config
00:03:08.079 bus/ifpga: not in enabled drivers build config
00:03:08.079 bus/platform: not in enabled drivers build config
00:03:08.079 bus/vmbus: not in enabled drivers build config
00:03:08.079 common/cnxk: not in enabled drivers build config
00:03:08.079 common/mlx5: not in enabled drivers build config
00:03:08.079 common/nfp: not in enabled drivers build config
00:03:08.079 common/qat: not in enabled drivers build config
00:03:08.079 common/sfc_efx: not in enabled drivers build config
00:03:08.079 mempool/bucket: not in enabled drivers build config
00:03:08.079 mempool/cnxk: not in enabled drivers build config
00:03:08.079 mempool/dpaa: not in enabled drivers build config
00:03:08.079 mempool/dpaa2: not in enabled drivers build config
00:03:08.079 mempool/octeontx: not in enabled drivers build config
00:03:08.079 mempool/stack: not in enabled drivers build config
00:03:08.079 dma/cnxk: not in enabled drivers build config
00:03:08.079 dma/dpaa: not in enabled drivers build config
00:03:08.079 dma/dpaa2: not in enabled drivers build config
00:03:08.079 dma/hisilicon: not in enabled drivers build config
00:03:08.079 dma/idxd: not in enabled drivers build config
00:03:08.079 dma/ioat: not in enabled drivers build config
00:03:08.079 dma/skeleton: not in enabled drivers build config
00:03:08.079 net/af_packet: not in enabled drivers build config
00:03:08.079 net/af_xdp: not in enabled drivers build config
00:03:08.079 net/ark: not in enabled drivers build config
00:03:08.079 net/atlantic: not in enabled drivers build config
00:03:08.079 net/avp: not in enabled drivers build config
00:03:08.079 net/axgbe: not in enabled drivers build config
00:03:08.079 net/bnx2x: not in enabled drivers build config
00:03:08.079 net/bnxt: not in enabled drivers build config
00:03:08.079 net/bonding: not in enabled drivers build config
00:03:08.079 net/cnxk: not in enabled drivers build config
00:03:08.079 net/cpfl: not in enabled drivers build config
00:03:08.079 net/cxgbe: not in enabled drivers build config
00:03:08.079 net/dpaa: not in enabled drivers build config
00:03:08.079 net/dpaa2: not in enabled drivers build config
00:03:08.079 net/e1000: not in enabled drivers build config
00:03:08.079 net/ena: not in enabled drivers build config
00:03:08.079 net/enetc: not in enabled drivers build config
00:03:08.079 net/enetfec: not in enabled drivers build config
00:03:08.079 net/enic: not in enabled drivers build config
00:03:08.079 net/failsafe: not in enabled drivers build config
00:03:08.079 net/fm10k: not in enabled drivers build config
00:03:08.079 net/gve: not in enabled drivers build config
00:03:08.079 net/hinic: not in enabled drivers build config
00:03:08.079 net/hns3: not in enabled drivers build config
00:03:08.079 net/iavf: not in enabled drivers build config
00:03:08.079 net/ice: not in enabled drivers build config
00:03:08.079 net/idpf: not in enabled drivers build config
00:03:08.079 net/igc: not in enabled drivers build config
00:03:08.079 net/ionic: not in enabled drivers build config
00:03:08.079 net/ipn3ke: not in enabled drivers build config
00:03:08.079 net/ixgbe: not in enabled drivers build config
00:03:08.079 net/mana: not in enabled drivers build config
00:03:08.079 net/memif: not in enabled drivers build config
00:03:08.079 net/mlx4: not in enabled drivers build config
00:03:08.079 net/mlx5: not in enabled drivers build config
00:03:08.079 net/mvneta: not in enabled drivers build config
00:03:08.079 net/mvpp2: not in enabled drivers build config
00:03:08.079 net/netvsc: not in enabled drivers build config
00:03:08.079 net/nfb: not in enabled drivers build config
00:03:08.079 net/nfp: not in enabled drivers build config
00:03:08.079 net/ngbe: not in enabled drivers build config
00:03:08.079 net/null: not in enabled drivers build config
00:03:08.080 net/octeontx: not in enabled drivers build config
00:03:08.080 net/octeon_ep: not in enabled drivers build config
00:03:08.080 net/pcap: not in enabled drivers build config
00:03:08.080 net/pfe: not in enabled drivers build config
00:03:08.080 net/qede: not in enabled drivers build config
00:03:08.080 net/ring: not in enabled drivers build config
00:03:08.080 net/sfc: not in enabled drivers build config
00:03:08.080 net/softnic: not in enabled drivers build config
00:03:08.080 net/tap: not in enabled drivers build config
00:03:08.080 net/thunderx: not in enabled drivers build config
00:03:08.080 net/txgbe: not in enabled drivers build config
00:03:08.080 net/vdev_netvsc: not in enabled drivers build config
00:03:08.080 net/vhost: not in enabled drivers build config
00:03:08.080 net/virtio: not in enabled drivers build config
00:03:08.080 net/vmxnet3: not in enabled drivers build config
00:03:08.080 raw/cnxk_bphy: not in enabled drivers build config
00:03:08.080 raw/cnxk_gpio: not in enabled drivers build config
00:03:08.080 raw/dpaa2_cmdif: not in enabled drivers build config
00:03:08.080 raw/ifpga: not in enabled drivers build config
00:03:08.080 raw/ntb: not in enabled drivers build config
00:03:08.080 raw/skeleton: not in enabled drivers build config
00:03:08.080 crypto/armv8: not in enabled drivers build config
00:03:08.080 crypto/bcmfs: not in enabled drivers build config
00:03:08.080 crypto/caam_jr: not in enabled drivers build config
00:03:08.080 crypto/ccp: not in enabled drivers build config
00:03:08.080 crypto/cnxk: not in enabled drivers build config
00:03:08.080 crypto/dpaa_sec: not in enabled drivers build config
00:03:08.080 crypto/dpaa2_sec: not in enabled drivers build config
00:03:08.080 crypto/ipsec_mb: not in enabled drivers build config
00:03:08.080 crypto/mlx5: not in enabled drivers build config
00:03:08.080 crypto/mvsam: not in enabled drivers build config
00:03:08.080 crypto/nitrox: not in enabled drivers build config
00:03:08.080 crypto/null: not in enabled drivers build config
00:03:08.080 crypto/octeontx: not in enabled drivers build config
00:03:08.080 crypto/openssl: not in enabled drivers build config
00:03:08.080 crypto/scheduler: not in enabled drivers build config
00:03:08.080 crypto/uadk: not in enabled drivers build config
00:03:08.080 crypto/virtio: not in enabled drivers build config
00:03:08.080 compress/isal: not in enabled drivers build config
00:03:08.080 compress/mlx5: not in enabled drivers build config
00:03:08.080 compress/octeontx: not in enabled drivers build config
00:03:08.080 compress/zlib: not in enabled drivers build config
00:03:08.080 regex/mlx5: not in enabled drivers build config
00:03:08.080 regex/cn9k: not in enabled drivers build config
00:03:08.080 ml/cnxk: not in enabled drivers build config
00:03:08.080 vdpa/ifc: not in enabled drivers build config
00:03:08.080 vdpa/mlx5: not in enabled drivers build config
00:03:08.080 vdpa/nfp: not in enabled drivers build config
00:03:08.080 vdpa/sfc: not in enabled drivers build config
00:03:08.080 event/cnxk: not in enabled drivers build config
00:03:08.080 event/dlb2: not in enabled drivers build config
00:03:08.080 event/dpaa: not in enabled drivers build config
00:03:08.080 event/dpaa2: not in enabled drivers build config
00:03:08.080 event/dsw: not in enabled drivers build config
00:03:08.080 event/opdl: not in enabled drivers build config
00:03:08.080 event/skeleton: not in enabled drivers build config
00:03:08.080 event/sw: not in enabled drivers build config
00:03:08.080 event/octeontx: not in enabled drivers build config
00:03:08.080 baseband/acc: not in enabled drivers build config
00:03:08.080 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:03:08.080 baseband/fpga_lte_fec: not in enabled drivers build config
00:03:08.080 baseband/la12xx: not in enabled drivers build config
00:03:08.080 baseband/null: not in enabled drivers build config
00:03:08.080 baseband/turbo_sw: not in enabled drivers build config
00:03:08.080 gpu/cuda: not in enabled drivers build config
00:03:08.080
00:03:08.080
00:03:08.080 Build targets in project: 215
00:03:08.080
00:03:08.080 DPDK 23.11.0
00:03:08.080
00:03:08.080 User defined options
00:03:08.080 libdir : lib
00:03:08.080 prefix : /home/vagrant/spdk_repo/dpdk/build
00:03:08.080 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:08.080 c_link_args :
00:03:08.080 enable_docs : false
00:03:08.080 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:03:08.080 enable_kmods : false
00:03:08.080 machine : native
00:03:08.080 tests : false
00:03:08.080
00:03:08.080 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:08.080 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:08.080 22:58:27 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:08.080 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:08.338 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:08.338 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:08.338 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:08.338 [4/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:08.338 [5/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:08.338 [6/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:08.338 [7/705] Linking static target lib/librte_kvargs.a
00:03:08.338 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:08.338 [9/705] Linking static target lib/librte_log.a
00:03:08.338 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:08.596 [11/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:08.596 [12/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.596 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:08.596 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:08.596 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:08.596 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:08.596 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.854 [18/705] Linking target lib/librte_log.so.24.0
00:03:08.854 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:08.854 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:08.854 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:08.854 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:08.854 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:08.854 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:08.854 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:09.113 [26/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:09.113 [27/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:03:09.113 [28/705] Linking target lib/librte_kvargs.so.24.0
00:03:09.113 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:09.113 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:09.113 [31/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:09.113 [32/705] Linking static target lib/librte_telemetry.a
00:03:09.113 [33/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:03:09.113 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:09.371 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:09.371 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:09.371 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:09.371 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:09.371 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:09.371 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:09.371 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:09.371 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.371 [43/705] Linking target lib/librte_telemetry.so.24.0
00:03:09.371 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:09.630 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:09.630 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:03:09.630 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:09.630 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:09.630 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:09.892 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:09.892 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:09.892 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:09.892 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:09.892 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:09.892 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:09.892 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:09.892 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:09.892 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:09.892 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:09.892 [60/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:10.149 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:10.149 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:10.149 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:10.149 [64/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:10.149 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:10.149 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:10.149 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:10.149 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:10.405 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:10.405 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:10.405 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:10.405 [72/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:10.405 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
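For reference, the "User defined options" block recorded above maps onto a `meson setup` invocation roughly like the sketch below. This is a hypothetical reconstruction, not the literal command from this run: the WARNING above shows the autobuild script still invokes it in the deprecated `meson [options]` form, and only the option values are taken from the log (DPDK 23.11 meson options such as enable_drivers/enable_docs/enable_kmods/tests/machine, plus meson's built-in prefix, libdir, and c_args).

# Illustrative reconstruction of the logged configuration (assumed, not
# captured verbatim in this log); paths follow the vagrant layout above.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
# The compile step is then the ninja invocation logged above:
ninja -C build-tmp -j10

Because enable_drivers is restricted to the PCI/vdev buses, the ring mempool, and the i40e PMD, every other driver shows up under "Content Skipped" above; the build continues below.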
00:03:10.405 [74/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:10.405 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:10.405 [76/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:10.405 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:10.405 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:10.662 [79/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:10.662 [80/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:10.662 [81/705] Linking static target lib/librte_ring.a
00:03:10.662 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:10.662 [83/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:10.662 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:10.920 [85/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:10.920 [86/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:10.920 [87/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.920 [88/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:10.920 [89/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:10.920 [90/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:10.920 [91/705] Linking static target lib/librte_eal.a
00:03:11.178 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:11.178 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:11.178 [94/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:11.178 [95/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:11.178 [96/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:11.178 [97/705] Linking static target lib/librte_mempool.a
00:03:11.437 [98/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:11.437 [99/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:11.437 [100/705] Linking static target lib/librte_rcu.a
00:03:11.437 [101/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:11.437 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:11.437 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:11.437 [104/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:11.437 [105/705] Linking static target lib/librte_meter.a
00:03:11.695 [106/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.695 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:11.695 [108/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:11.695 [109/705] Linking static target lib/librte_net.a
00:03:11.695 [110/705] Linking static target lib/librte_mbuf.a
00:03:11.695 [111/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.695 [112/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.695 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:11.695 [114/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:11.953 [115/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:11.953 [116/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.953 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:12.211 [118/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.211 [119/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:12.211 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:12.468 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:12.468 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:12.468 [123/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:12.468 [124/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:12.468 [125/705] Linking static target lib/librte_pci.a
00:03:12.468 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:12.468 [127/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:12.468 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:12.726 [129/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:12.726 [130/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:12.726 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.726 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:12.726 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:12.726 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:12.726 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:12.726 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:12.726 [137/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:12.726 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:12.726 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:12.726 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:12.984 [141/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:12.984 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:12.984 [143/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:12.984 [144/705] Linking static target lib/librte_cmdline.a
00:03:13.242 [145/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:13.242 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:13.242 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:13.242 [148/705] Linking static target lib/librte_metrics.a
00:03:13.500 [149/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:13.500 [150/705] Linking static target lib/librte_timer.a
00:03:13.500 [151/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:13.500 [152/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.500 [153/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:13.758 [154/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.758 [155/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.758 [156/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:14.015 [157/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:14.015 [158/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:14.016 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:14.273 [160/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:14.273 [161/705] Linking static target lib/librte_bitratestats.a
00:03:14.273 [162/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:14.274 [163/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:14.274 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.531 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:14.531 [166/705] Linking static target lib/librte_bbdev.a
00:03:14.531 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:14.531 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:14.531 [169/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:14.531 [170/705] Linking static target lib/librte_hash.a
00:03:14.789 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:14.789 [172/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:14.789 [173/705] Linking static target lib/librte_ethdev.a
00:03:14.789 [174/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:14.789 [175/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.048 [176/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:03:15.048 [177/705] Linking static target lib/acl/libavx2_tmp.a
00:03:15.048 [178/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:15.048 [179/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.306 [180/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:15.306 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:15.306 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:15.306 [183/705] Linking static target lib/librte_cfgfile.a
00:03:15.306 [184/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.307 [185/705] Linking target lib/librte_eal.so.24.0
00:03:15.307 [186/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:03:15.565 [187/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.565 [188/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:03:15.565 [189/705] Linking target lib/librte_ring.so.24.0
00:03:15.565 [190/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:15.565 [191/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:15.565 [192/705] Linking target lib/librte_meter.so.24.0
00:03:15.565 [193/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:15.565 [194/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:03:15.565 [195/705] Linking target lib/librte_pci.so.24.0
00:03:15.565 [196/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:15.565 [197/705] Linking target lib/librte_timer.so.24.0
00:03:15.565 [198/705] Linking target lib/librte_mempool.so.24.0
00:03:15.565 [199/705] Linking target lib/librte_rcu.so.24.0
00:03:15.565 [200/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:15.565 [201/705] Linking static target lib/librte_bpf.a
00:03:15.565 [202/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:03:15.565 [203/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:03:15.565 [204/705] Linking static target lib/librte_acl.a
00:03:15.824 [205/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:03:15.824 [206/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:03:15.824 [207/705] Linking target lib/librte_cfgfile.so.24.0
00:03:15.824 [208/705] Linking static target lib/librte_compressdev.a
00:03:15.824 [209/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:03:15.824 [210/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:15.824 [211/705] Linking target lib/librte_mbuf.so.24.0
00:03:15.824 [212/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:03:15.824 [213/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:03:15.824 [214/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:15.824 [215/705] Linking target lib/librte_net.so.24.0
00:03:15.824 [216/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.824 [217/705] Linking target lib/librte_bbdev.so.24.0
00:03:15.824 [218/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.824 [219/705] Linking target lib/librte_acl.so.24.0
00:03:16.082 [220/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:03:16.082 [221/705] Linking target lib/librte_cmdline.so.24.0
00:03:16.082 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:16.082 [223/705] Linking target lib/librte_hash.so.24.0
00:03:16.082 [224/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols
00:03:16.082 [225/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:16.082 [226/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.082 [227/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:03:16.082 [228/705] Linking target lib/librte_compressdev.so.24.0
00:03:16.082 [229/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:16.340 [230/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:16.340 [231/705] Linking static target lib/librte_distributor.a
00:03:16.340 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:16.340 [233/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:16.340 [234/705] Linking static target lib/librte_dmadev.a
00:03:16.340 [235/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.598 [236/705] Linking target lib/librte_distributor.so.24.0
00:03:16.598 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:16.598 [238/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.857 [239/705] Linking target lib/librte_dmadev.so.24.0
00:03:16.857 [240/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:16.857 [241/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:03:16.857 [242/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:16.857 [243/705] Linking static target lib/librte_efd.a
00:03:17.161 [244/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:03:17.161 [245/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:17.161 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.161 [247/705] Linking target lib/librte_efd.so.24.0
00:03:17.471 [248/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:17.471 [249/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:03:17.471 [250/705] Linking static target lib/librte_dispatcher.a
00:03:17.471 [251/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:17.471 [252/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:17.471 [253/705] Linking static target lib/librte_cryptodev.a
00:03:17.471 [254/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:17.471 [255/705] Linking static target lib/librte_gpudev.a
00:03:17.471 [256/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:17.729 [257/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:17.729 [258/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.729 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:03:17.987 [260/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:17.987 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:17.987 [262/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:17.987 [263/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:17.987 [264/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:17.987 [265/705] Linking static target lib/librte_eventdev.a
00:03:18.245 [266/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.245 [267/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:18.245 [268/705] Linking target lib/librte_gpudev.so.24.0
00:03:18.245 [269/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:18.245 [270/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:18.245 [271/705] Linking static target lib/librte_gro.a
00:03:18.245 [272/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:18.245 [273/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:18.503 [274/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.503 [275/705] Linking target lib/librte_cryptodev.so.24.0
00:03:18.503 [276/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.503 [277/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.503 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:18.503 [279/705] Linking static target lib/librte_gso.a
00:03:18.503 [280/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:18.503 [281/705] Linking target lib/librte_ethdev.so.24.0
00:03:18.503 [282/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:03:18.503 [283/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:03:18.503 [284/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:18.761 [285/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:18.761 [286/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.761 [287/705] Linking target lib/librte_metrics.so.24.0
00:03:18.761 [288/705] Linking target lib/librte_gro.so.24.0
00:03:18.761 [289/705] Linking target lib/librte_bpf.so.24.0
00:03:18.761 [290/705] Linking target lib/librte_gso.so.24.0
00:03:18.761 [291/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:18.761 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:18.761 [293/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:03:18.761 [294/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:03:18.761 [295/705] Linking target lib/librte_bitratestats.so.24.0
00:03:18.762 [296/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:18.762 [297/705] Linking static target lib/librte_jobstats.a
00:03:19.020 [298/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:19.020 [299/705] Linking static target lib/librte_latencystats.a
00:03:19.020 [300/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:19.020 [301/705] Linking static target lib/librte_ip_frag.a
00:03:19.020 [302/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:19.020 [303/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.020 [304/705] Linking target lib/librte_jobstats.so.24.0
00:03:19.020 [305/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.020 [306/705] Linking target lib/librte_latencystats.so.24.0
00:03:19.279 [307/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:19.279 [308/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.279 [309/705] Linking target lib/librte_ip_frag.so.24.0
00:03:19.279 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:19.279 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:03:19.279 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:19.537 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:19.538 [314/705] Linking static target lib/librte_lpm.a
00:03:19.538 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:19.538 [316/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o
00:03:19.538 [317/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:19.538 [318/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:19.538 [319/705] Linking static target lib/librte_pcapng.a
00:03:19.538 [320/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:19.797 [321/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.797 [322/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.797 [323/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:19.797 [324/705] Linking target lib/librte_lpm.so.24.0
00:03:19.797 [325/705] Linking target lib/librte_eventdev.so.24.0
00:03:19.797 [326/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:19.797 [327/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.797 [328/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:03:19.797 [329/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:19.797 [330/705] Linking target lib/librte_pcapng.so.24.0
00:03:19.797 [331/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:03:19.797 [332/705] Linking target lib/librte_dispatcher.so.24.0
00:03:19.797 [333/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:20.054 [334/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:03:20.054 [335/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:20.054 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:20.054 [337/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:20.054 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:20.312 [339/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:20.312 [340/705] Linking static target lib/librte_power.a
00:03:20.312 [341/705] Linking static target lib/librte_member.a
00:03:20.312 [342/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:20.312 [343/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:20.312 [344/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:20.312 [345/705] Linking static target lib/librte_regexdev.a
00:03:20.312 [346/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:20.312 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:20.312 [348/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:20.312 [349/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.312 [350/705] Linking static target lib/librte_rawdev.a
00:03:20.312 [351/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:20.312 [352/705] Linking static target lib/librte_mldev.a
00:03:20.571 [353/705] Linking target lib/librte_member.so.24.0
00:03:20.571 [354/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:20.571 [355/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:20.571 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:20.571 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.571 [358/705] Linking target lib/librte_power.so.24.0
00:03:20.571 [359/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:20.571 [360/705] Linking static target lib/librte_reorder.a
00:03:20.830 [361/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:20.830 [362/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.830 [363/705] Linking target lib/librte_rawdev.so.24.0
00:03:20.830 [364/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.830 [365/705] Linking target lib/librte_regexdev.so.24.0
00:03:20.830 [366/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:20.830 [367/705] Linking static target lib/librte_rib.a
00:03:20.830 [368/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:20.830 [369/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.830 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:20.830 [371/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:21.090 [372/705] Linking target lib/librte_reorder.so.24.0
00:03:21.090 [373/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:03:21.090 [374/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:21.090 [375/705] Linking static target lib/librte_stack.a
00:03:21.090 [376/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.090 [377/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:21.090 [378/705] Linking static target lib/librte_security.a
00:03:21.090 [379/705] Linking target lib/librte_rib.so.24.0
00:03:21.349 [380/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.349 [381/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:21.349 [382/705] Linking target lib/librte_stack.so.24.0
00:03:21.349 [383/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols
00:03:21.349 [384/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.349 [385/705] Linking target lib/librte_mldev.so.24.0
00:03:21.349 [386/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:21.607 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:21.607 [388/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.607 [389/705] Linking target lib/librte_security.so.24.0
00:03:21.607 [390/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:21.607 [391/705] Linking static target lib/librte_sched.a
00:03:21.607 [392/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:03:21.867 [393/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.867 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:21.867 [395/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:21.867 [396/705] Linking target lib/librte_sched.so.24.0
00:03:21.867 [397/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:21.867 [398/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:03:22.125 [399/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:22.125 [400/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:22.125 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:22.386 [402/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:22.386 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:22.386 [404/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:22.386 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:22.710 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:22.710 [407/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:22.710 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:22.710 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:22.710 [410/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:22.710 [411/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:22.968 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:22.968 [413/705] Linking static target lib/librte_ipsec.a
00:03:23.227 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o
00:03:23.227 [415/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o
00:03:23.227 [416/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.227 [417/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:23.227 [418/705] Linking target lib/librte_ipsec.so.24.0
00:03:23.485 [419/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:23.485 [420/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:23.485 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:23.485 [422/705] Linking static target lib/librte_fib.a
00:03:23.485 [423/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:03:23.485 [424/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:23.485 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:23.485 [426/705] Linking static target lib/librte_pdcp.a
00:03:23.485 [427/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.743 [428/705] Linking target lib/librte_fib.so.24.0
00:03:23.743 [429/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:23.743 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:23.743 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.743 [432/705] Linking target lib/librte_pdcp.so.24.0
00:03:24.001 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:24.001 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:24.260 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:24.260 [436/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:24.260 [437/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:24.260 [438/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:24.518 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:24.518 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:24.518 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:24.518 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:24.518 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:24.518 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:24.776 [445/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:24.776 [446/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:24.776 [447/705] Linking static target lib/librte_pdump.a
00:03:24.776 [448/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:24.776 [449/705] Linking static target lib/librte_port.a
00:03:24.776 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:25.034 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:25.034 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.034 [453/705] Linking target lib/librte_pdump.so.24.0
00:03:25.034 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.292 [455/705] Linking target lib/librte_port.so.24.0
00:03:25.292 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:25.292 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:25.292 [458/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:03:25.292 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:25.292 [460/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:25.292 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:25.551 [462/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:25.551 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:25.551 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:25.551 [465/705] Linking static target lib/librte_table.a
00:03:25.551 [466/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:25.551 [467/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:25.809 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:26.067 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:26.067 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.067 [471/705] Linking target lib/librte_table.so.24.0
00:03:26.067 [472/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:26.068 [473/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:03:26.068 [474/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:26.326 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:26.326 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:26.326 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:26.585 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:26.585 [479/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:26.585 [480/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:26.585 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:26.844 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:26.844 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:26.844 [484/705] Linking static target lib/librte_graph.a
00:03:26.844 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:26.844 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:27.102 [487/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:27.102 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:27.360 [489/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.360 [490/705] Linking target lib/librte_graph.so.24.0
00:03:27.361 [491/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:27.361 [492/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:03:27.361 [493/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:27.361 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:27.619 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:27.619 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:27.619 [497/705] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:27.619 [498/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:27.619 [499/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:27.894 [500/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:27.894 [501/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:27.895 [502/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:27.895 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:28.158 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:28.158 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:28.158 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:28.158 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:28.158 [508/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:28.158 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:28.158 [510/705] Linking static target lib/librte_node.a
00:03:28.417 [511/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:28.417 [512/705] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:28.417 [513/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:28.417 [514/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.417 [515/705] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:28.417 [516/705] Linking target lib/librte_node.so.24.0
00:03:28.417 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:28.417 [518/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:28.417 [519/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:28.417 [520/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:28.417 [521/705] Linking static target drivers/librte_bus_pci.a
00:03:28.417 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:28.417 [523/705] Linking static target drivers/librte_bus_vdev.a
00:03:28.674 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:28.674 [525/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:28.674 [526/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:28.674 [527/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:28.675 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.675 [529/705] Linking target drivers/librte_bus_vdev.so.24.0
00:03:28.675 [530/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:03:28.932 [531/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.932 [532/705] Linking target drivers/librte_bus_pci.so.24.0
00:03:28.932 [533/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:28.932 [534/705] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:28.932 [535/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:03:28.932 [536/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:28.932 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:28.932 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:28.932 [539/705] Linking static target drivers/librte_mempool_ring.a
00:03:28.932 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:29.189 [541/705] Linking target drivers/librte_mempool_ring.so.24.0
00:03:29.189 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:29.448 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:29.706 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:29.706 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:29.963 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:30.220 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:30.220 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:30.220 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:30.220 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:30.479 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:30.737 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:30.737 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:30.737 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:30.737 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:30.737 [556/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:30.995 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:30.995 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:30.995 [559/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:30.995 [560/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:31.253 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:31.511 [562/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:31.511 [563/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:31.511 [564/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:31.511 [565/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:31.511 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:31.511 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:31.769 [568/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:31.769 [569/705] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:31.769 [570/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:31.769 [571/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:31.769 [572/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:31.769 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:32.026 [574/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:32.026 [575/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:32.026 [576/705] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:32.026 [577/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:32.284 [578/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:32.284 [579/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:32.284 [580/705] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:32.284 [581/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:32.284 [582/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:32.284 [583/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:32.284 [584/705] Linking static target drivers/librte_net_i40e.a
00:03:32.543 [585/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:32.543 [586/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:32.543 [587/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:32.800 [588/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.800 [589/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:32.801 [590/705] Linking target drivers/librte_net_i40e.so.24.0
00:03:33.058 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:33.058 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:33.058 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:33.058 [594/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:33.058 [595/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:33.316 [596/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:33.316 [597/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:33.316 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:33.574 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:33.574 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:33.574 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:33.574 [602/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:33.574 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:33.832 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:33.832 [605/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:33.832 [606/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:33.832 [607/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:33.832 [608/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:34.092 [609/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:34.092 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:34.092 [611/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:34.350 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:34.350 [613/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:34.350 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:34.350 [615/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.350 [616/705] Linking static target lib/librte_vhost.a 00:03:34.916 [617/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:34.916 [618/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:34.916 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:34.916 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:35.173 [621/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:35.173 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:35.173 [623/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:35.173 [624/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.431 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:35.431 [626/705] Linking target lib/librte_vhost.so.24.0 00:03:35.431 [627/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:35.431 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:35.431 [629/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:35.431 [630/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:35.431 [631/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:35.688 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:35.688 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:35.689 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:35.689 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:35.689 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:35.945 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:35.945 [638/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:35.946 [639/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:35.946 [640/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:35.946 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:35.946 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:36.203 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:36.203 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:36.203 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:36.203 [646/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:36.203 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:36.461 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:36.461 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:36.461 [650/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:36.719 [651/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:36.719 [652/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:36.719 [653/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:36.978 [654/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:36.978 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:36.978 [656/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:36.978 [657/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:36.978 [658/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:37.236 [659/705] Linking static target lib/librte_pipeline.a 00:03:37.236 [660/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:37.236 [661/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:37.236 [662/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:37.236 [663/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:37.493 [664/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:37.493 [665/705] Linking target app/dpdk-dumpcap 00:03:37.752 [666/705] Linking target app/dpdk-graph 00:03:37.752 [667/705] Linking target app/dpdk-pdump 00:03:37.752 [668/705] Linking target app/dpdk-proc-info 00:03:37.752 [669/705] Linking target app/dpdk-test-acl 00:03:37.752 [670/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:37.752 [671/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:38.011 [672/705] Linking target app/dpdk-test-bbdev 00:03:38.011 [673/705] Linking target app/dpdk-test-cmdline 00:03:38.011 [674/705] Linking target app/dpdk-test-compress-perf 00:03:38.011 [675/705] Linking target app/dpdk-test-crypto-perf 00:03:38.011 [676/705] Linking target app/dpdk-test-dma-perf 00:03:38.011 [677/705] Linking target app/dpdk-test-eventdev 00:03:38.269 [678/705] Linking target app/dpdk-test-fib 00:03:38.269 [679/705] Linking target app/dpdk-test-flow-perf 00:03:38.269 [680/705] Linking target app/dpdk-test-mldev 00:03:38.269 [681/705] Linking target app/dpdk-test-gpudev 00:03:38.527 [682/705] Linking target app/dpdk-test-pipeline 00:03:38.527 [683/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:38.527 [684/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:38.527 [685/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:38.527 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:38.785 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:38.785 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:39.044 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:39.044 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:39.044 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:39.304 [692/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.304 [693/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:39.304 [694/705] Linking target lib/librte_pipeline.so.24.0
00:03:39.304 [695/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:39.304 [696/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:39.304 [697/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:39.304 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:39.563 [699/705] Linking target app/dpdk-test-sad
00:03:39.563 [700/705] Linking target app/dpdk-test-regex
00:03:39.822 [701/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:39.822 [702/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:40.080 [703/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:40.080 [704/705] Linking target app/dpdk-test-security-perf
00:03:40.338 [705/705] Linking target app/dpdk-testpmd
00:03:40.338 22:58:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:40.338 22:58:59 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:40.338 22:58:59 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:40.338 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:40.338 [0/1] Installing files.
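The two trace lines above show autobuild_common.sh checking `uname -s` against FreeBSD (a no-op here, since the host is Linux) and then launching the install. As a minimal standalone sketch, the sequence driving this phase amounts to the following shell commands; the build directory and -j10 job count are taken verbatim from the log, while the meson setup invocation is an assumption (the configure step ran earlier in this job, and the --prefix value is inferred from the /home/vagrant/spdk_repo/dpdk/build/... destinations in the install log that follows):

    # Assumed configure step: build into build-tmp, install under ./build
    # (the --prefix value is inferred from the install paths below)
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build
    # Build and install with 10 parallel jobs, matching the logged command
    ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install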
00:03:40.600 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.600 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.601 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:40.602 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.602 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.603 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.604 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.605 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:40.605 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.605 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
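At this point the install stage has moved from copying example sources to installing the built artifacts themselves: each library lands twice under /home/vagrant/spdk_repo/dpdk/build/lib, once as a static archive (librte_*.a) and once as a versioned shared object (librte_*.so.24.0). As a hedged illustration only, nothing in it taken from this log: a minimal program linked against these installed libraries could look roughly like the sketch below, where hello.c is a hypothetical file name and the pkg-config invocation in the comment assumes PKG_CONFIG_PATH is pointed at this install prefix.

/* Minimal sketch, not part of this build log: a DPDK application that
 * initializes the EAL from the libraries installed above. A typical
 * build command (an assumption, not shown in the log) would be:
 *   cc hello.c $(pkg-config --cflags --libs libdpdk)
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() parses and consumes the EAL arguments
     * (core mask, memory options, device allow/block lists, ...). */
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init() failed\n");
        return 1;
    }

    printf("hello from lcore %u\n", rte_lcore_id());

    /* Release EAL resources before exiting. */
    rte_eal_cleanup();
    return 0;
}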
00:03:40.967 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.967 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
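Among the libraries installed in this stretch is librte_ring, whose public headers (rte_ring.h and related files) are copied into build/include later in the log. A second hedged sketch, again an illustration rather than anything executed by this job: creating and exercising a single-producer/single-consumer ring from one lcore, assuming the EAL setup shown in the previous sketch.

/* Hedged sketch, not from this log: basic use of librte_ring. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    /* Ring size must be a power of two; SP_ENQ/SC_DEQ select the
     * single-producer/single-consumer fast paths. */
    struct rte_ring *r = rte_ring_create("demo_ring", 1024,
                                         rte_socket_id(),
                                         RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        return 1;

    int value = 42;
    void *obj = NULL;

    rte_ring_enqueue(r, &value);          /* enqueue one pointer */
    if (rte_ring_dequeue(r, &obj) == 0)   /* dequeue it back */
        printf("dequeued %d\n", *(int *)obj);

    rte_ring_free(r);
    rte_eal_cleanup();
    return 0;
}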
00:03:40.968 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.968 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.968 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.968 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:40.968 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:40.968 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.968 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.969 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:40.970 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:41.233 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:41.233 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:41.233 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:41.233 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:41.233 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:41.233 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:41.233 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:41.233 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:41.233 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:41.233 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:41.233 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:41.233 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:41.233 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:41.233 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:41.233 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:41.233 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:41.233 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:41.233 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:41.233 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:41.233 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:41.233 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:41.233 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:41.233 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:41.233 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:41.233 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:41.233 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:41.233 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:41.233 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:41.233 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:41.233 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:41.233 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:41.233 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:41.233 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:41.233 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:41.233 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:41.233 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:41.233 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:41.233 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:41.233 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:41.233 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:41.233 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:41.233 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:41.233 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:41.233 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:41.233 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:41.233 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:41.233 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:41.233 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:41.233 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:41.233 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:41.233 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:41.233 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:41.233 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:41.233 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:41.233 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:41.233 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:41.233 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:41.233 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:41.233 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:41.233 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:41.234 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:41.234 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:41.234 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:41.234 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:41.234 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:41.234 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:41.234 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:41.234 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:41.234 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:41.234 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:41.234 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:41.234 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:41.234 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:41.234 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:41.234 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:41.234 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:41.234 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:41.234 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:41.234 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:41.234 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:41.234 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:41.234 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:41.234 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:41.234 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:41.234 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:41.234 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:41.234 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:41.234 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:41.234 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:41.234 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:41.234 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:41.234 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:41.234 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:41.234 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:41.234 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:41.234 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:41.234 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:41.234 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:41.234 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:41.234 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:41.234 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:41.234 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:41.234 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:41.234 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:41.234 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:41.234 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:41.234 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:41.234 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:41.234 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:41.234 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:41.234 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:41.234 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:41.234 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:41.234 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:41.234 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:41.234 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:41.234 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:41.234 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:41.234 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:41.234 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:41.234 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:41.234 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:41.234 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:41.234 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:41.234 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:41.234 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:41.234 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:41.234 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:41.234 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:41.234 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:41.234 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:41.234 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:41.234 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so
00:03:41.234 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:41.234 22:59:00 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:03:41.234 22:59:00 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:41.234 
00:03:41.234 real 0m38.797s
00:03:41.234 user 4m28.983s
00:03:41.234 sys 0m41.735s
00:03:41.234 22:59:00 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:03:41.234 22:59:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:41.234 ************************************
00:03:41.234 END TEST build_native_dpdk
00:03:41.234 ************************************
00:03:41.234 22:59:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:41.234 22:59:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:41.234 22:59:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:41.234 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:41.497 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:41.497 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:41.497 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:41.763 Using 'verbs' RDMA provider
00:03:52.669 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:04.889 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:04.889 Creating mk/config.mk...done.
00:04:04.890 Creating mk/cc.flags.mk...done.
00:04:04.890 Type 'make' to build.
00:04:04.890 22:59:23 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:04:04.890 22:59:23 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:04:04.890 22:59:23 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:04:04.890 22:59:23 -- common/autotest_common.sh@10 -- $ set +x
00:04:04.890 ************************************
00:04:04.890 START TEST make
00:04:04.890 ************************************
00:04:04.890 22:59:23 make -- common/autotest_common.sh@1125 -- $ make -j10
00:04:04.890 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:04:04.890 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:04:04.890 meson setup builddir \
00:04:04.890 -Dwith-libaio=enabled \
00:04:04.890 -Dwith-liburing=enabled \
00:04:04.890 -Dwith-libvfn=disabled \
00:04:04.890 -Dwith-spdk=false && \
00:04:04.890 meson compile -C builddir && \
00:04:04.890 cd -)
00:04:04.890 make[1]: Nothing to be done for 'all'.
00:04:06.264 The Meson build system
00:04:06.264 Version: 1.5.0
00:04:06.264 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:04:06.264 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:06.264 Build type: native build
00:04:06.264 Project name: xnvme
00:04:06.264 Project version: 0.7.3
00:04:06.264 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:04:06.264 C linker for the host machine: gcc ld.bfd 2.40-14
00:04:06.264 Host machine cpu family: x86_64
00:04:06.264 Host machine cpu: x86_64
00:04:06.264 Message: host_machine.system: linux
00:04:06.264 Compiler for C supports arguments -Wno-missing-braces: YES
00:04:06.264 Compiler for C supports arguments -Wno-cast-function-type: YES
00:04:06.264 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:04:06.264 Run-time dependency threads found: YES
00:04:06.264 Has header "setupapi.h" : NO
00:04:06.264 Has header "linux/blkzoned.h" : YES
00:04:06.264 Has header "linux/blkzoned.h" : YES (cached)
00:04:06.264 Has header "libaio.h" : YES
00:04:06.264 Library aio found: YES
00:04:06.264 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:06.264 Run-time dependency liburing found: YES 2.2
00:04:06.264 Dependency libvfn skipped: feature with-libvfn disabled
00:04:06.264 Run-time dependency appleframeworks found: NO (tried framework)
00:04:06.264 Run-time dependency appleframeworks found: NO (tried framework)
00:04:06.264 Configuring xnvme_config.h using configuration
00:04:06.264 Configuring xnvme.spec using configuration
00:04:06.264 Run-time dependency bash-completion found: YES 2.11
00:04:06.264 Message: Bash-completions: /usr/share/bash-completion/completions
00:04:06.264 Program cp found: YES (/usr/bin/cp)
00:04:06.264 Has header "winsock2.h" : NO
00:04:06.264 Has header "dbghelp.h" : NO
00:04:06.264 Library rpcrt4 found: NO
00:04:06.264 Library rt found: YES
00:04:06.264 Checking for function "clock_gettime" with dependency -lrt: YES
00:04:06.264 Found CMake: /usr/bin/cmake (3.27.7)
00:04:06.264 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:04:06.264 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:04:06.264 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:04:06.264 Build targets in project: 32
00:04:06.264 
00:04:06.264 xnvme 0.7.3
00:04:06.264 
00:04:06.264 User defined options
00:04:06.264 with-libaio : enabled
00:04:06.264 with-liburing: enabled
00:04:06.264 with-libvfn : disabled
00:04:06.264 with-spdk : false
00:04:06.264 
00:04:06.264 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:06.523 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:04:06.523 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:04:06.781 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:04:06.781 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:04:06.781 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:04:06.781 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:04:06.781 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:04:06.781 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:04:06.781 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:04:06.781 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:04:06.781 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:04:06.781
[11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:06.781 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:06.781 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:06.781 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:06.781 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:06.781 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:06.781 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:06.781 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:06.781 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:06.781 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:06.781 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:06.781 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:06.781 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:06.781 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:06.781 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:07.039 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:07.039 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:07.039 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:07.039 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:07.039 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:07.039 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:07.039 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:07.039 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:07.039 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:07.039 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:07.039 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:07.039 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:07.039 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:07.039 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:07.039 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:07.039 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:07.039 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:07.039 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:07.039 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:07.039 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:07.039 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:07.039 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:07.039 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:07.039 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:07.039 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:07.039 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:07.039 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:07.039 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:07.039 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:07.039 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:07.039 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:07.039 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:07.039 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:07.039 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:07.039 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:07.297 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:07.297 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:07.297 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:07.297 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:07.297 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:07.297 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:07.297 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:07.297 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:07.297 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:07.297 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:07.297 [71/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:07.297 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:07.297 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:07.297 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:07.297 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:07.297 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:07.297 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:07.297 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:07.297 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:07.556 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:07.556 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:07.556 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:07.556 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:07.556 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:07.556 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:07.556 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:07.556 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:07.556 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:07.556 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:07.556 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:07.556 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:07.556 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:07.556 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:07.556 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:07.556 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:07.556 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:07.556 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:07.814 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:07.814 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:07.814 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:07.814 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:07.814 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:07.814 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:07.814 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:07.814 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:07.814 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:07.814 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:07.814 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:07.814 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:07.814 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:07.814 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:07.814 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:07.814 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:07.814 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:07.814 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:07.814 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:07.814 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:07.814 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:07.814 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:07.814 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:07.814 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:07.814 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:07.814 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:07.814 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:07.814 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:07.814 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:07.814 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:07.814 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:07.814 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:07.814 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:07.814 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:07.814 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:07.814 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:08.073 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:08.073 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:08.073 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:08.073 [137/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:08.073 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:08.073 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:08.073 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:08.073 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:08.073 [142/203] Compiling C object 
lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:08.074 [143/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:08.074 [144/203] Linking target lib/libxnvme.so 00:04:08.074 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:08.074 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:08.074 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:08.074 [148/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:08.074 [149/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:08.074 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:08.074 [151/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:08.332 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:08.332 [153/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:08.332 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:08.332 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:08.332 [156/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:08.332 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:08.332 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:08.332 [159/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:08.332 [160/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:08.332 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:08.333 [162/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:08.333 [163/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:08.333 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:08.333 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:08.333 [166/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:08.333 [167/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:08.333 [168/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:08.333 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:08.591 [170/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:08.591 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:08.591 [172/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:08.591 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:08.591 [174/203] Linking static target lib/libxnvme.a 00:04:08.591 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:08.850 [176/203] Linking target tests/xnvme_tests_cli 00:04:08.850 [177/203] Linking target tests/xnvme_tests_buf 00:04:08.850 [178/203] Linking target tests/xnvme_tests_lblk 00:04:08.850 [179/203] Linking target tests/xnvme_tests_scc 00:04:08.850 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:08.850 [181/203] Linking target tests/xnvme_tests_ioworker 00:04:08.850 [182/203] Linking target tests/xnvme_tests_znd_state 00:04:08.850 [183/203] Linking target tests/xnvme_tests_xnvme_file 00:04:08.850 [184/203] Linking target tests/xnvme_tests_znd_append 00:04:08.850 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:08.850 [186/203] Linking target tests/xnvme_tests_kvs 00:04:08.850 [187/203] Linking target tests/xnvme_tests_enum 00:04:08.850 [188/203] Linking target tools/xdd 00:04:08.850 [189/203] Linking target tests/xnvme_tests_map 00:04:08.850 [190/203] Linking target tools/zoned 00:04:08.850 [191/203] Linking target tools/xnvme 
00:04:08.850 [192/203] Linking target tools/xnvme_file
00:04:08.850 [193/203] Linking target tools/lblk
00:04:08.850 [194/203] Linking target tests/xnvme_tests_znd_explicit_open
00:04:08.850 [195/203] Linking target examples/xnvme_dev
00:04:08.850 [196/203] Linking target examples/xnvme_single_async
00:04:08.850 [197/203] Linking target examples/xnvme_hello
00:04:08.850 [198/203] Linking target examples/xnvme_enum
00:04:08.850 [199/203] Linking target examples/xnvme_io_async
00:04:08.850 [200/203] Linking target tools/kvs
00:04:08.850 [201/203] Linking target examples/xnvme_single_sync
00:04:08.850 [202/203] Linking target examples/zoned_io_async
00:04:08.850 [203/203] Linking target examples/zoned_io_sync
00:04:08.850 INFO: autodetecting backend as ninja
00:04:08.850 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:04:08.850 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:04:40.938 CC lib/log/log.o
00:04:40.938 CC lib/log/log_flags.o
00:04:40.938 CC lib/log/log_deprecated.o
00:04:40.938 CC lib/ut/ut.o
00:04:40.938 CC lib/ut_mock/mock.o
00:04:40.938 LIB libspdk_log.a
00:04:40.938 LIB libspdk_ut.a
00:04:40.938 LIB libspdk_ut_mock.a
00:04:40.938 SO libspdk_log.so.7.0
00:04:40.938 SO libspdk_ut_mock.so.6.0
00:04:40.938 SO libspdk_ut.so.2.0
00:04:40.938 SYMLINK libspdk_log.so
00:04:40.938 SYMLINK libspdk_ut_mock.so
00:04:40.938 SYMLINK libspdk_ut.so
00:04:40.938 CC lib/dma/dma.o
00:04:40.938 CC lib/ioat/ioat.o
00:04:40.938 CC lib/util/base64.o
00:04:40.938 CC lib/util/bit_array.o
00:04:40.938 CC lib/util/crc16.o
00:04:40.938 CC lib/util/crc32.o
00:04:40.938 CC lib/util/cpuset.o
00:04:40.938 CC lib/util/crc32c.o
00:04:40.938 CXX lib/trace_parser/trace.o
00:04:40.938 CC lib/vfio_user/host/vfio_user_pci.o
00:04:40.938 CC lib/util/crc32_ieee.o
00:04:40.938 CC lib/util/crc64.o
00:04:40.938 CC lib/util/dif.o
00:04:40.938 LIB libspdk_dma.a
00:04:40.938 CC lib/util/fd.o
00:04:40.938 SO libspdk_dma.so.5.0
00:04:40.938 SYMLINK libspdk_dma.so
00:04:40.938 CC lib/vfio_user/host/vfio_user.o
00:04:40.938 CC lib/util/fd_group.o
00:04:40.938 CC lib/util/file.o
00:04:40.938 CC lib/util/hexlify.o
00:04:40.938 CC lib/util/iov.o
00:04:40.938 CC lib/util/math.o
00:04:40.938 CC lib/util/net.o
00:04:40.938 LIB libspdk_ioat.a
00:04:40.938 CC lib/util/pipe.o
00:04:40.938 SO libspdk_ioat.so.7.0
00:04:40.938 CC lib/util/strerror_tls.o
00:04:40.938 LIB libspdk_vfio_user.a
00:04:40.938 SYMLINK libspdk_ioat.so
00:04:40.938 CC lib/util/string.o
00:04:40.938 CC lib/util/uuid.o
00:04:40.938 CC lib/util/xor.o
00:04:40.938 SO libspdk_vfio_user.so.5.0
00:04:40.938 CC lib/util/zipf.o
00:04:40.938 CC lib/util/md5.o
00:04:40.938 SYMLINK libspdk_vfio_user.so
00:04:40.938 LIB libspdk_util.a
00:04:40.938 SO libspdk_util.so.10.0
00:04:40.938 SYMLINK libspdk_util.so
00:04:40.938 LIB libspdk_trace_parser.a
00:04:40.938 SO libspdk_trace_parser.so.6.0
00:04:40.938 SYMLINK libspdk_trace_parser.so
00:04:40.938 CC lib/json/json_parse.o
00:04:40.938 CC lib/json/json_util.o
00:04:40.938 CC lib/json/json_write.o
00:04:40.938 CC lib/conf/conf.o
00:04:40.938 CC lib/idxd/idxd_user.o
00:04:40.938 CC lib/vmd/vmd.o
00:04:40.938 CC lib/idxd/idxd.o
00:04:40.938 CC lib/env_dpdk/env.o
00:04:40.938 CC lib/rdma_utils/rdma_utils.o
00:04:40.938 CC lib/rdma_provider/common.o
00:04:40.938 CC lib/rdma_provider/rdma_provider_verbs.o
00:04:40.938 CC lib/idxd/idxd_kernel.o
00:04:40.938 LIB libspdk_conf.a
00:04:40.938 CC lib/env_dpdk/memory.o
00:04:40.938 SO libspdk_conf.so.6.0
00:04:40.938 CC lib/vmd/led.o
00:04:40.938 LIB libspdk_rdma_utils.a 00:04:40.938 LIB libspdk_json.a 00:04:40.938 SO libspdk_rdma_utils.so.1.0 00:04:41.200 SYMLINK libspdk_conf.so 00:04:41.200 CC lib/env_dpdk/pci.o 00:04:41.200 SO libspdk_json.so.6.0 00:04:41.200 LIB libspdk_rdma_provider.a 00:04:41.200 SYMLINK libspdk_rdma_utils.so 00:04:41.200 SO libspdk_rdma_provider.so.6.0 00:04:41.200 CC lib/env_dpdk/init.o 00:04:41.200 CC lib/env_dpdk/threads.o 00:04:41.200 CC lib/env_dpdk/pci_ioat.o 00:04:41.200 SYMLINK libspdk_json.so 00:04:41.200 CC lib/env_dpdk/pci_virtio.o 00:04:41.200 SYMLINK libspdk_rdma_provider.so 00:04:41.200 CC lib/env_dpdk/pci_vmd.o 00:04:41.200 CC lib/env_dpdk/pci_idxd.o 00:04:41.200 CC lib/env_dpdk/pci_event.o 00:04:41.200 CC lib/jsonrpc/jsonrpc_server.o 00:04:41.457 CC lib/env_dpdk/sigbus_handler.o 00:04:41.457 CC lib/env_dpdk/pci_dpdk.o 00:04:41.457 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:41.457 LIB libspdk_idxd.a 00:04:41.457 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:41.457 LIB libspdk_vmd.a 00:04:41.457 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:41.457 SO libspdk_idxd.so.12.1 00:04:41.457 SO libspdk_vmd.so.6.0 00:04:41.457 CC lib/jsonrpc/jsonrpc_client.o 00:04:41.457 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:41.457 SYMLINK libspdk_idxd.so 00:04:41.457 SYMLINK libspdk_vmd.so 00:04:41.715 LIB libspdk_jsonrpc.a 00:04:41.715 SO libspdk_jsonrpc.so.6.0 00:04:41.715 SYMLINK libspdk_jsonrpc.so 00:04:41.971 CC lib/rpc/rpc.o 00:04:42.228 LIB libspdk_env_dpdk.a 00:04:42.228 SO libspdk_env_dpdk.so.15.0 00:04:42.228 LIB libspdk_rpc.a 00:04:42.228 SO libspdk_rpc.so.6.0 00:04:42.486 SYMLINK libspdk_env_dpdk.so 00:04:42.486 SYMLINK libspdk_rpc.so 00:04:42.486 CC lib/keyring/keyring.o 00:04:42.486 CC lib/keyring/keyring_rpc.o 00:04:42.486 CC lib/trace/trace_flags.o 00:04:42.486 CC lib/trace/trace_rpc.o 00:04:42.486 CC lib/trace/trace.o 00:04:42.486 CC lib/notify/notify_rpc.o 00:04:42.486 CC lib/notify/notify.o 00:04:42.744 LIB libspdk_notify.a 00:04:42.744 SO libspdk_notify.so.6.0 00:04:42.744 LIB libspdk_keyring.a 00:04:42.744 SYMLINK libspdk_notify.so 00:04:42.744 LIB libspdk_trace.a 00:04:42.744 SO libspdk_keyring.so.2.0 00:04:42.744 SO libspdk_trace.so.11.0 00:04:42.744 SYMLINK libspdk_keyring.so 00:04:43.001 SYMLINK libspdk_trace.so 00:04:43.001 CC lib/sock/sock.o 00:04:43.001 CC lib/sock/sock_rpc.o 00:04:43.001 CC lib/thread/thread.o 00:04:43.001 CC lib/thread/iobuf.o 00:04:43.565 LIB libspdk_sock.a 00:04:43.565 SO libspdk_sock.so.10.0 00:04:43.565 SYMLINK libspdk_sock.so 00:04:43.821 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:43.821 CC lib/nvme/nvme_ctrlr.o 00:04:43.821 CC lib/nvme/nvme_ns_cmd.o 00:04:43.821 CC lib/nvme/nvme_fabric.o 00:04:43.821 CC lib/nvme/nvme_pcie.o 00:04:43.821 CC lib/nvme/nvme.o 00:04:43.821 CC lib/nvme/nvme_ns.o 00:04:43.821 CC lib/nvme/nvme_qpair.o 00:04:43.821 CC lib/nvme/nvme_pcie_common.o 00:04:44.386 CC lib/nvme/nvme_quirks.o 00:04:44.386 CC lib/nvme/nvme_transport.o 00:04:44.644 CC lib/nvme/nvme_discovery.o 00:04:44.644 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:44.644 LIB libspdk_thread.a 00:04:44.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:44.644 CC lib/nvme/nvme_tcp.o 00:04:44.644 SO libspdk_thread.so.10.1 00:04:44.644 CC lib/nvme/nvme_opal.o 00:04:44.901 SYMLINK libspdk_thread.so 00:04:44.901 CC lib/nvme/nvme_io_msg.o 00:04:44.901 CC lib/nvme/nvme_poll_group.o 00:04:45.159 CC lib/accel/accel.o 00:04:45.159 CC lib/nvme/nvme_zns.o 00:04:45.160 CC lib/blob/blobstore.o 00:04:45.160 CC lib/blob/request.o 00:04:45.417 CC lib/accel/accel_rpc.o 00:04:45.417 CC lib/init/json_config.o 00:04:45.417 CC 
lib/virtio/virtio.o 00:04:45.417 CC lib/virtio/virtio_vhost_user.o 00:04:45.417 CC lib/virtio/virtio_vfio_user.o 00:04:45.417 CC lib/virtio/virtio_pci.o 00:04:45.674 CC lib/init/subsystem.o 00:04:45.674 CC lib/blob/zeroes.o 00:04:45.674 CC lib/init/subsystem_rpc.o 00:04:45.674 CC lib/blob/blob_bs_dev.o 00:04:45.674 CC lib/init/rpc.o 00:04:45.674 CC lib/nvme/nvme_stubs.o 00:04:45.674 CC lib/accel/accel_sw.o 00:04:45.674 LIB libspdk_virtio.a 00:04:45.931 SO libspdk_virtio.so.7.0 00:04:45.931 SYMLINK libspdk_virtio.so 00:04:45.931 CC lib/nvme/nvme_auth.o 00:04:45.931 LIB libspdk_init.a 00:04:45.931 SO libspdk_init.so.6.0 00:04:45.931 CC lib/nvme/nvme_cuse.o 00:04:45.931 SYMLINK libspdk_init.so 00:04:45.931 CC lib/nvme/nvme_rdma.o 00:04:45.931 CC lib/fsdev/fsdev.o 00:04:46.189 CC lib/fsdev/fsdev_io.o 00:04:46.189 CC lib/fsdev/fsdev_rpc.o 00:04:46.189 CC lib/event/app.o 00:04:46.189 LIB libspdk_accel.a 00:04:46.189 CC lib/event/reactor.o 00:04:46.189 SO libspdk_accel.so.16.0 00:04:46.446 CC lib/event/log_rpc.o 00:04:46.446 SYMLINK libspdk_accel.so 00:04:46.446 CC lib/event/app_rpc.o 00:04:46.446 CC lib/event/scheduler_static.o 00:04:46.704 CC lib/bdev/bdev.o 00:04:46.704 CC lib/bdev/bdev_rpc.o 00:04:46.704 CC lib/bdev/bdev_zone.o 00:04:46.704 CC lib/bdev/part.o 00:04:46.704 CC lib/bdev/scsi_nvme.o 00:04:46.704 LIB libspdk_fsdev.a 00:04:46.704 LIB libspdk_event.a 00:04:46.704 SO libspdk_fsdev.so.1.0 00:04:46.704 SO libspdk_event.so.14.0 00:04:46.704 SYMLINK libspdk_fsdev.so 00:04:46.704 SYMLINK libspdk_event.so 00:04:46.962 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:46.962 LIB libspdk_nvme.a 00:04:47.220 SO libspdk_nvme.so.14.0 00:04:47.478 SYMLINK libspdk_nvme.so 00:04:47.478 LIB libspdk_fuse_dispatcher.a 00:04:47.478 SO libspdk_fuse_dispatcher.so.1.0 00:04:47.734 SYMLINK libspdk_fuse_dispatcher.so 00:04:48.298 LIB libspdk_blob.a 00:04:48.298 SO libspdk_blob.so.11.0 00:04:48.555 SYMLINK libspdk_blob.so 00:04:48.812 CC lib/lvol/lvol.o 00:04:48.812 CC lib/blobfs/tree.o 00:04:48.812 CC lib/blobfs/blobfs.o 00:04:49.070 LIB libspdk_bdev.a 00:04:49.070 SO libspdk_bdev.so.16.0 00:04:49.070 SYMLINK libspdk_bdev.so 00:04:49.342 CC lib/ublk/ublk.o 00:04:49.342 CC lib/nvmf/ctrlr.o 00:04:49.342 CC lib/ublk/ublk_rpc.o 00:04:49.342 CC lib/nvmf/ctrlr_discovery.o 00:04:49.342 CC lib/nvmf/ctrlr_bdev.o 00:04:49.342 CC lib/ftl/ftl_core.o 00:04:49.342 CC lib/scsi/dev.o 00:04:49.342 CC lib/nbd/nbd.o 00:04:49.342 CC lib/scsi/lun.o 00:04:49.605 CC lib/nbd/nbd_rpc.o 00:04:49.605 CC lib/ftl/ftl_init.o 00:04:49.605 CC lib/nvmf/subsystem.o 00:04:49.605 LIB libspdk_blobfs.a 00:04:49.605 CC lib/ftl/ftl_layout.o 00:04:49.605 SO libspdk_blobfs.so.10.0 00:04:49.605 LIB libspdk_nbd.a 00:04:49.605 SO libspdk_nbd.so.7.0 00:04:49.605 CC lib/scsi/port.o 00:04:49.605 LIB libspdk_lvol.a 00:04:49.605 SYMLINK libspdk_blobfs.so 00:04:49.605 CC lib/scsi/scsi.o 00:04:49.605 SO libspdk_lvol.so.10.0 00:04:49.605 CC lib/scsi/scsi_bdev.o 00:04:49.864 SYMLINK libspdk_nbd.so 00:04:49.864 CC lib/scsi/scsi_pr.o 00:04:49.864 LIB libspdk_ublk.a 00:04:49.864 SO libspdk_ublk.so.3.0 00:04:49.864 SYMLINK libspdk_lvol.so 00:04:49.864 CC lib/ftl/ftl_debug.o 00:04:49.864 CC lib/ftl/ftl_io.o 00:04:49.864 CC lib/ftl/ftl_sb.o 00:04:49.864 SYMLINK libspdk_ublk.so 00:04:49.864 CC lib/ftl/ftl_l2p.o 00:04:49.864 CC lib/scsi/scsi_rpc.o 00:04:49.864 CC lib/nvmf/nvmf.o 00:04:49.864 CC lib/nvmf/nvmf_rpc.o 00:04:49.864 CC lib/ftl/ftl_l2p_flat.o 00:04:49.864 CC lib/ftl/ftl_nv_cache.o 00:04:50.120 CC lib/scsi/task.o 00:04:50.120 CC lib/ftl/ftl_band.o 00:04:50.120 
CC lib/ftl/ftl_band_ops.o 00:04:50.120 CC lib/ftl/ftl_writer.o 00:04:50.120 CC lib/ftl/ftl_rq.o 00:04:50.120 LIB libspdk_scsi.a 00:04:50.378 SO libspdk_scsi.so.9.0 00:04:50.378 CC lib/nvmf/transport.o 00:04:50.378 SYMLINK libspdk_scsi.so 00:04:50.378 CC lib/ftl/ftl_reloc.o 00:04:50.378 CC lib/ftl/ftl_l2p_cache.o 00:04:50.378 CC lib/ftl/ftl_p2l.o 00:04:50.636 CC lib/iscsi/conn.o 00:04:50.636 CC lib/ftl/ftl_p2l_log.o 00:04:50.636 CC lib/ftl/mngt/ftl_mngt.o 00:04:50.636 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:50.893 CC lib/nvmf/tcp.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:50.893 CC lib/nvmf/stubs.o 00:04:50.893 CC lib/nvmf/mdns_server.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:50.893 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:51.151 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:51.151 CC lib/iscsi/init_grp.o 00:04:51.151 CC lib/ftl/utils/ftl_conf.o 00:04:51.151 CC lib/nvmf/rdma.o 00:04:51.151 CC lib/iscsi/iscsi.o 00:04:51.151 CC lib/ftl/utils/ftl_md.o 00:04:51.151 CC lib/vhost/vhost.o 00:04:51.151 CC lib/iscsi/param.o 00:04:51.411 CC lib/iscsi/portal_grp.o 00:04:51.411 CC lib/iscsi/tgt_node.o 00:04:51.411 CC lib/iscsi/iscsi_subsystem.o 00:04:51.411 CC lib/nvmf/auth.o 00:04:51.411 CC lib/iscsi/iscsi_rpc.o 00:04:51.411 CC lib/iscsi/task.o 00:04:51.669 CC lib/ftl/utils/ftl_mempool.o 00:04:51.669 CC lib/vhost/vhost_rpc.o 00:04:51.669 CC lib/vhost/vhost_scsi.o 00:04:51.669 CC lib/ftl/utils/ftl_bitmap.o 00:04:51.669 CC lib/vhost/vhost_blk.o 00:04:51.669 CC lib/ftl/utils/ftl_property.o 00:04:51.669 CC lib/vhost/rte_vhost_user.o 00:04:51.927 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:51.927 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:51.927 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:52.185 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:52.186 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:52.186 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:52.186 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:52.186 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:52.186 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:52.447 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:52.447 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:52.447 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:52.447 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:52.447 CC lib/ftl/base/ftl_base_dev.o 00:04:52.447 CC lib/ftl/base/ftl_base_bdev.o 00:04:52.447 CC lib/ftl/ftl_trace.o 00:04:52.447 LIB libspdk_iscsi.a 00:04:52.707 SO libspdk_iscsi.so.8.0 00:04:52.707 SYMLINK libspdk_iscsi.so 00:04:52.707 LIB libspdk_ftl.a 00:04:52.707 LIB libspdk_vhost.a 00:04:52.707 SO libspdk_vhost.so.8.0 00:04:52.966 SYMLINK libspdk_vhost.so 00:04:52.966 SO libspdk_ftl.so.9.0 00:04:52.966 LIB libspdk_nvmf.a 00:04:53.255 SO libspdk_nvmf.so.19.0 00:04:53.255 SYMLINK libspdk_ftl.so 00:04:53.255 SYMLINK libspdk_nvmf.so 00:04:53.513 CC module/env_dpdk/env_dpdk_rpc.o 00:04:53.513 CC module/fsdev/aio/fsdev_aio.o 00:04:53.513 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:53.513 CC module/blob/bdev/blob_bdev.o 00:04:53.513 CC module/scheduler/gscheduler/gscheduler.o 00:04:53.771 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:53.771 CC module/accel/ioat/accel_ioat.o 00:04:53.771 CC module/keyring/file/keyring.o 00:04:53.771 CC module/accel/error/accel_error.o 00:04:53.771 CC 
module/sock/posix/posix.o 00:04:53.771 LIB libspdk_env_dpdk_rpc.a 00:04:53.771 SO libspdk_env_dpdk_rpc.so.6.0 00:04:53.771 SYMLINK libspdk_env_dpdk_rpc.so 00:04:53.771 CC module/accel/error/accel_error_rpc.o 00:04:53.771 CC module/keyring/file/keyring_rpc.o 00:04:53.771 LIB libspdk_scheduler_gscheduler.a 00:04:53.771 LIB libspdk_scheduler_dpdk_governor.a 00:04:53.771 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:53.771 CC module/accel/ioat/accel_ioat_rpc.o 00:04:53.771 SO libspdk_scheduler_gscheduler.so.4.0 00:04:53.771 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:53.771 SYMLINK libspdk_scheduler_gscheduler.so 00:04:53.771 LIB libspdk_scheduler_dynamic.a 00:04:53.771 LIB libspdk_accel_error.a 00:04:53.771 LIB libspdk_keyring_file.a 00:04:53.771 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:53.771 SO libspdk_scheduler_dynamic.so.4.0 00:04:53.771 SO libspdk_accel_error.so.2.0 00:04:53.771 SO libspdk_keyring_file.so.2.0 00:04:54.028 CC module/fsdev/aio/linux_aio_mgr.o 00:04:54.028 SYMLINK libspdk_scheduler_dynamic.so 00:04:54.028 SYMLINK libspdk_keyring_file.so 00:04:54.028 SYMLINK libspdk_accel_error.so 00:04:54.028 LIB libspdk_blob_bdev.a 00:04:54.028 LIB libspdk_accel_ioat.a 00:04:54.028 SO libspdk_blob_bdev.so.11.0 00:04:54.028 SO libspdk_accel_ioat.so.6.0 00:04:54.028 CC module/accel/dsa/accel_dsa.o 00:04:54.028 CC module/accel/dsa/accel_dsa_rpc.o 00:04:54.028 CC module/accel/iaa/accel_iaa.o 00:04:54.028 CC module/accel/iaa/accel_iaa_rpc.o 00:04:54.028 SYMLINK libspdk_blob_bdev.so 00:04:54.028 SYMLINK libspdk_accel_ioat.so 00:04:54.028 CC module/keyring/linux/keyring.o 00:04:54.028 CC module/keyring/linux/keyring_rpc.o 00:04:54.286 LIB libspdk_accel_iaa.a 00:04:54.286 LIB libspdk_fsdev_aio.a 00:04:54.286 CC module/bdev/delay/vbdev_delay.o 00:04:54.286 SO libspdk_accel_iaa.so.3.0 00:04:54.286 LIB libspdk_accel_dsa.a 00:04:54.286 SO libspdk_fsdev_aio.so.1.0 00:04:54.286 LIB libspdk_keyring_linux.a 00:04:54.286 CC module/blobfs/bdev/blobfs_bdev.o 00:04:54.286 SO libspdk_accel_dsa.so.5.0 00:04:54.286 SO libspdk_keyring_linux.so.1.0 00:04:54.286 SYMLINK libspdk_accel_iaa.so 00:04:54.286 CC module/bdev/error/vbdev_error.o 00:04:54.286 CC module/bdev/gpt/gpt.o 00:04:54.286 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:54.286 SYMLINK libspdk_fsdev_aio.so 00:04:54.286 CC module/bdev/gpt/vbdev_gpt.o 00:04:54.286 SYMLINK libspdk_accel_dsa.so 00:04:54.286 SYMLINK libspdk_keyring_linux.so 00:04:54.286 CC module/bdev/error/vbdev_error_rpc.o 00:04:54.286 CC module/bdev/lvol/vbdev_lvol.o 00:04:54.286 LIB libspdk_sock_posix.a 00:04:54.286 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:54.286 SO libspdk_sock_posix.so.6.0 00:04:54.544 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:54.544 CC module/bdev/malloc/bdev_malloc.o 00:04:54.544 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:54.544 SYMLINK libspdk_sock_posix.so 00:04:54.545 LIB libspdk_bdev_error.a 00:04:54.545 SO libspdk_bdev_error.so.6.0 00:04:54.545 LIB libspdk_bdev_gpt.a 00:04:54.545 LIB libspdk_blobfs_bdev.a 00:04:54.545 SO libspdk_bdev_gpt.so.6.0 00:04:54.545 LIB libspdk_bdev_delay.a 00:04:54.545 SO libspdk_blobfs_bdev.so.6.0 00:04:54.545 SYMLINK libspdk_bdev_error.so 00:04:54.545 CC module/bdev/null/bdev_null.o 00:04:54.545 SO libspdk_bdev_delay.so.6.0 00:04:54.545 CC module/bdev/null/bdev_null_rpc.o 00:04:54.545 CC module/bdev/nvme/bdev_nvme.o 00:04:54.545 SYMLINK libspdk_bdev_gpt.so 00:04:54.545 SYMLINK libspdk_blobfs_bdev.so 00:04:54.803 SYMLINK libspdk_bdev_delay.so 00:04:54.803 LIB libspdk_bdev_lvol.a 00:04:54.803 CC 
module/bdev/passthru/vbdev_passthru.o 00:04:54.803 SO libspdk_bdev_lvol.so.6.0 00:04:54.803 CC module/bdev/split/vbdev_split.o 00:04:54.803 CC module/bdev/raid/bdev_raid.o 00:04:54.803 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:54.803 LIB libspdk_bdev_null.a 00:04:54.803 LIB libspdk_bdev_malloc.a 00:04:54.803 SYMLINK libspdk_bdev_lvol.so 00:04:54.803 CC module/bdev/xnvme/bdev_xnvme.o 00:04:54.803 CC module/bdev/raid/bdev_raid_rpc.o 00:04:54.803 SO libspdk_bdev_null.so.6.0 00:04:54.803 CC module/bdev/aio/bdev_aio.o 00:04:54.803 SO libspdk_bdev_malloc.so.6.0 00:04:55.061 SYMLINK libspdk_bdev_null.so 00:04:55.061 CC module/bdev/raid/bdev_raid_sb.o 00:04:55.061 SYMLINK libspdk_bdev_malloc.so 00:04:55.061 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:55.061 CC module/bdev/split/vbdev_split_rpc.o 00:04:55.061 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:55.061 CC module/bdev/raid/raid0.o 00:04:55.061 CC module/bdev/raid/raid1.o 00:04:55.061 LIB libspdk_bdev_split.a 00:04:55.061 LIB libspdk_bdev_xnvme.a 00:04:55.061 SO libspdk_bdev_split.so.6.0 00:04:55.061 LIB libspdk_bdev_passthru.a 00:04:55.061 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:55.061 SO libspdk_bdev_xnvme.so.3.0 00:04:55.061 SO libspdk_bdev_passthru.so.6.0 00:04:55.320 SYMLINK libspdk_bdev_split.so 00:04:55.320 SYMLINK libspdk_bdev_xnvme.so 00:04:55.320 CC module/bdev/raid/concat.o 00:04:55.320 CC module/bdev/aio/bdev_aio_rpc.o 00:04:55.320 SYMLINK libspdk_bdev_passthru.so 00:04:55.320 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:55.320 LIB libspdk_bdev_zone_block.a 00:04:55.320 SO libspdk_bdev_zone_block.so.6.0 00:04:55.320 CC module/bdev/ftl/bdev_ftl.o 00:04:55.320 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:55.320 LIB libspdk_bdev_aio.a 00:04:55.320 SO libspdk_bdev_aio.so.6.0 00:04:55.320 SYMLINK libspdk_bdev_zone_block.so 00:04:55.320 CC module/bdev/nvme/nvme_rpc.o 00:04:55.320 CC module/bdev/iscsi/bdev_iscsi.o 00:04:55.320 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:55.577 SYMLINK libspdk_bdev_aio.so 00:04:55.577 CC module/bdev/nvme/bdev_mdns_client.o 00:04:55.577 CC module/bdev/nvme/vbdev_opal.o 00:04:55.577 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:55.577 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:55.577 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:55.577 LIB libspdk_bdev_ftl.a 00:04:55.577 SO libspdk_bdev_ftl.so.6.0 00:04:55.834 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:55.834 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:55.834 SYMLINK libspdk_bdev_ftl.so 00:04:55.834 LIB libspdk_bdev_iscsi.a 00:04:55.834 SO libspdk_bdev_iscsi.so.6.0 00:04:55.834 LIB libspdk_bdev_raid.a 00:04:55.834 SYMLINK libspdk_bdev_iscsi.so 00:04:55.834 SO libspdk_bdev_raid.so.6.0 00:04:55.834 SYMLINK libspdk_bdev_raid.so 00:04:55.834 LIB libspdk_bdev_virtio.a 00:04:56.092 SO libspdk_bdev_virtio.so.6.0 00:04:56.092 SYMLINK libspdk_bdev_virtio.so 00:04:57.025 LIB libspdk_bdev_nvme.a 00:04:57.025 SO libspdk_bdev_nvme.so.7.0 00:04:57.025 SYMLINK libspdk_bdev_nvme.so 00:04:57.590 CC module/event/subsystems/iobuf/iobuf.o 00:04:57.590 CC module/event/subsystems/fsdev/fsdev.o 00:04:57.590 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:57.590 CC module/event/subsystems/scheduler/scheduler.o 00:04:57.590 CC module/event/subsystems/keyring/keyring.o 00:04:57.590 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:57.590 CC module/event/subsystems/sock/sock.o 00:04:57.590 CC module/event/subsystems/vmd/vmd.o 00:04:57.590 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:57.590 LIB libspdk_event_vhost_blk.a 00:04:57.590 LIB 
libspdk_event_scheduler.a 00:04:57.590 LIB libspdk_event_fsdev.a 00:04:57.590 LIB libspdk_event_keyring.a 00:04:57.590 LIB libspdk_event_sock.a 00:04:57.590 LIB libspdk_event_iobuf.a 00:04:57.590 LIB libspdk_event_vmd.a 00:04:57.590 SO libspdk_event_keyring.so.1.0 00:04:57.590 SO libspdk_event_vhost_blk.so.3.0 00:04:57.590 SO libspdk_event_scheduler.so.4.0 00:04:57.590 SO libspdk_event_fsdev.so.1.0 00:04:57.590 SO libspdk_event_sock.so.5.0 00:04:57.590 SO libspdk_event_iobuf.so.3.0 00:04:57.590 SO libspdk_event_vmd.so.6.0 00:04:57.590 SYMLINK libspdk_event_vhost_blk.so 00:04:57.590 SYMLINK libspdk_event_scheduler.so 00:04:57.590 SYMLINK libspdk_event_keyring.so 00:04:57.590 SYMLINK libspdk_event_fsdev.so 00:04:57.590 SYMLINK libspdk_event_sock.so 00:04:57.590 SYMLINK libspdk_event_iobuf.so 00:04:57.590 SYMLINK libspdk_event_vmd.so 00:04:57.849 CC module/event/subsystems/accel/accel.o 00:04:58.108 LIB libspdk_event_accel.a 00:04:58.108 SO libspdk_event_accel.so.6.0 00:04:58.108 SYMLINK libspdk_event_accel.so 00:04:58.365 CC module/event/subsystems/bdev/bdev.o 00:04:58.622 LIB libspdk_event_bdev.a 00:04:58.622 SO libspdk_event_bdev.so.6.0 00:04:58.622 SYMLINK libspdk_event_bdev.so 00:04:58.622 CC module/event/subsystems/scsi/scsi.o 00:04:58.880 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:58.880 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:58.880 CC module/event/subsystems/ublk/ublk.o 00:04:58.880 CC module/event/subsystems/nbd/nbd.o 00:04:58.880 LIB libspdk_event_ublk.a 00:04:58.880 LIB libspdk_event_scsi.a 00:04:58.880 LIB libspdk_event_nbd.a 00:04:58.880 SO libspdk_event_ublk.so.3.0 00:04:58.880 SO libspdk_event_scsi.so.6.0 00:04:58.880 SO libspdk_event_nbd.so.6.0 00:04:58.880 SYMLINK libspdk_event_ublk.so 00:04:58.880 LIB libspdk_event_nvmf.a 00:04:58.880 SYMLINK libspdk_event_scsi.so 00:04:58.880 SYMLINK libspdk_event_nbd.so 00:04:58.880 SO libspdk_event_nvmf.so.6.0 00:04:59.139 SYMLINK libspdk_event_nvmf.so 00:04:59.139 CC module/event/subsystems/iscsi/iscsi.o 00:04:59.139 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:59.399 LIB libspdk_event_vhost_scsi.a 00:04:59.399 LIB libspdk_event_iscsi.a 00:04:59.399 SO libspdk_event_vhost_scsi.so.3.0 00:04:59.399 SO libspdk_event_iscsi.so.6.0 00:04:59.399 SYMLINK libspdk_event_vhost_scsi.so 00:04:59.399 SYMLINK libspdk_event_iscsi.so 00:04:59.399 SO libspdk.so.6.0 00:04:59.399 SYMLINK libspdk.so 00:04:59.659 CC app/spdk_nvme_perf/perf.o 00:04:59.659 CXX app/trace/trace.o 00:04:59.659 CC app/spdk_nvme_identify/identify.o 00:04:59.659 CC app/trace_record/trace_record.o 00:04:59.659 CC app/spdk_lspci/spdk_lspci.o 00:04:59.659 CC app/iscsi_tgt/iscsi_tgt.o 00:04:59.659 CC app/nvmf_tgt/nvmf_main.o 00:04:59.659 CC app/spdk_tgt/spdk_tgt.o 00:04:59.659 CC examples/util/zipf/zipf.o 00:04:59.919 CC test/thread/poller_perf/poller_perf.o 00:04:59.919 LINK spdk_lspci 00:04:59.919 LINK nvmf_tgt 00:04:59.919 LINK spdk_tgt 00:04:59.919 LINK poller_perf 00:04:59.919 LINK iscsi_tgt 00:04:59.919 LINK zipf 00:04:59.919 LINK spdk_trace_record 00:04:59.919 LINK spdk_trace 00:05:00.178 CC app/spdk_nvme_discover/discovery_aer.o 00:05:00.178 CC app/spdk_top/spdk_top.o 00:05:00.178 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:00.178 CC examples/ioat/perf/perf.o 00:05:00.178 CC examples/vmd/lsvmd/lsvmd.o 00:05:00.178 CC test/dma/test_dma/test_dma.o 00:05:00.178 CC examples/idxd/perf/perf.o 00:05:00.178 LINK spdk_nvme_discover 00:05:00.436 CC examples/thread/thread/thread_ex.o 00:05:00.436 LINK interrupt_tgt 00:05:00.436 LINK lsvmd 00:05:00.436 LINK 
ioat_perf 00:05:00.436 LINK spdk_nvme_identify 00:05:00.436 LINK thread 00:05:00.436 LINK spdk_nvme_perf 00:05:00.436 CC examples/sock/hello_world/hello_sock.o 00:05:00.695 LINK idxd_perf 00:05:00.695 CC examples/vmd/led/led.o 00:05:00.695 CC examples/ioat/verify/verify.o 00:05:00.695 CC test/app/bdev_svc/bdev_svc.o 00:05:00.695 LINK test_dma 00:05:00.695 LINK led 00:05:00.695 CC app/spdk_dd/spdk_dd.o 00:05:00.695 CC test/app/histogram_perf/histogram_perf.o 00:05:00.695 LINK verify 00:05:00.695 LINK hello_sock 00:05:00.953 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:00.953 CC app/fio/nvme/fio_plugin.o 00:05:00.953 LINK bdev_svc 00:05:00.953 LINK spdk_top 00:05:00.953 LINK histogram_perf 00:05:00.953 TEST_HEADER include/spdk/accel.h 00:05:00.953 TEST_HEADER include/spdk/accel_module.h 00:05:00.953 TEST_HEADER include/spdk/assert.h 00:05:00.953 TEST_HEADER include/spdk/barrier.h 00:05:00.953 TEST_HEADER include/spdk/base64.h 00:05:00.953 TEST_HEADER include/spdk/bdev.h 00:05:00.953 TEST_HEADER include/spdk/bdev_module.h 00:05:00.953 TEST_HEADER include/spdk/bdev_zone.h 00:05:00.953 TEST_HEADER include/spdk/bit_array.h 00:05:00.953 TEST_HEADER include/spdk/bit_pool.h 00:05:00.953 CC app/fio/bdev/fio_plugin.o 00:05:00.953 TEST_HEADER include/spdk/blob_bdev.h 00:05:00.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:00.953 TEST_HEADER include/spdk/blobfs.h 00:05:00.953 TEST_HEADER include/spdk/blob.h 00:05:00.953 TEST_HEADER include/spdk/conf.h 00:05:00.953 TEST_HEADER include/spdk/config.h 00:05:00.953 TEST_HEADER include/spdk/cpuset.h 00:05:00.953 TEST_HEADER include/spdk/crc16.h 00:05:00.953 TEST_HEADER include/spdk/crc32.h 00:05:00.953 TEST_HEADER include/spdk/crc64.h 00:05:00.953 TEST_HEADER include/spdk/dif.h 00:05:00.953 TEST_HEADER include/spdk/dma.h 00:05:00.953 TEST_HEADER include/spdk/endian.h 00:05:00.953 TEST_HEADER include/spdk/env_dpdk.h 00:05:00.953 TEST_HEADER include/spdk/env.h 00:05:00.953 TEST_HEADER include/spdk/event.h 00:05:00.953 TEST_HEADER include/spdk/fd_group.h 00:05:00.953 TEST_HEADER include/spdk/fd.h 00:05:00.953 TEST_HEADER include/spdk/file.h 00:05:00.953 TEST_HEADER include/spdk/fsdev.h 00:05:00.953 TEST_HEADER include/spdk/fsdev_module.h 00:05:00.953 TEST_HEADER include/spdk/ftl.h 00:05:00.953 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:00.953 TEST_HEADER include/spdk/gpt_spec.h 00:05:00.953 TEST_HEADER include/spdk/hexlify.h 00:05:00.953 TEST_HEADER include/spdk/histogram_data.h 00:05:00.953 TEST_HEADER include/spdk/idxd.h 00:05:00.953 TEST_HEADER include/spdk/idxd_spec.h 00:05:00.953 TEST_HEADER include/spdk/init.h 00:05:00.953 TEST_HEADER include/spdk/ioat.h 00:05:00.953 TEST_HEADER include/spdk/ioat_spec.h 00:05:00.953 TEST_HEADER include/spdk/iscsi_spec.h 00:05:00.953 TEST_HEADER include/spdk/json.h 00:05:00.953 TEST_HEADER include/spdk/jsonrpc.h 00:05:00.953 TEST_HEADER include/spdk/keyring.h 00:05:00.953 TEST_HEADER include/spdk/keyring_module.h 00:05:00.953 TEST_HEADER include/spdk/likely.h 00:05:00.953 TEST_HEADER include/spdk/log.h 00:05:00.953 TEST_HEADER include/spdk/lvol.h 00:05:00.953 TEST_HEADER include/spdk/md5.h 00:05:00.953 TEST_HEADER include/spdk/memory.h 00:05:00.953 TEST_HEADER include/spdk/mmio.h 00:05:00.953 TEST_HEADER include/spdk/nbd.h 00:05:00.953 TEST_HEADER include/spdk/net.h 00:05:00.953 TEST_HEADER include/spdk/notify.h 00:05:00.953 CC app/vhost/vhost.o 00:05:00.953 TEST_HEADER include/spdk/nvme.h 00:05:00.953 TEST_HEADER include/spdk/nvme_intel.h 00:05:00.953 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:00.953 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:05:00.953 TEST_HEADER include/spdk/nvme_spec.h 00:05:00.953 TEST_HEADER include/spdk/nvme_zns.h 00:05:00.953 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:00.953 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:00.953 TEST_HEADER include/spdk/nvmf.h 00:05:00.953 TEST_HEADER include/spdk/nvmf_spec.h 00:05:00.953 TEST_HEADER include/spdk/nvmf_transport.h 00:05:00.953 TEST_HEADER include/spdk/opal.h 00:05:00.953 TEST_HEADER include/spdk/opal_spec.h 00:05:01.212 TEST_HEADER include/spdk/pci_ids.h 00:05:01.212 TEST_HEADER include/spdk/pipe.h 00:05:01.212 TEST_HEADER include/spdk/queue.h 00:05:01.212 TEST_HEADER include/spdk/reduce.h 00:05:01.212 TEST_HEADER include/spdk/rpc.h 00:05:01.212 CC examples/accel/perf/accel_perf.o 00:05:01.212 TEST_HEADER include/spdk/scheduler.h 00:05:01.212 TEST_HEADER include/spdk/scsi.h 00:05:01.212 TEST_HEADER include/spdk/scsi_spec.h 00:05:01.212 TEST_HEADER include/spdk/sock.h 00:05:01.212 LINK spdk_dd 00:05:01.212 TEST_HEADER include/spdk/stdinc.h 00:05:01.212 TEST_HEADER include/spdk/string.h 00:05:01.212 TEST_HEADER include/spdk/thread.h 00:05:01.212 TEST_HEADER include/spdk/trace.h 00:05:01.212 TEST_HEADER include/spdk/trace_parser.h 00:05:01.212 TEST_HEADER include/spdk/tree.h 00:05:01.212 TEST_HEADER include/spdk/ublk.h 00:05:01.212 TEST_HEADER include/spdk/util.h 00:05:01.212 TEST_HEADER include/spdk/uuid.h 00:05:01.212 TEST_HEADER include/spdk/version.h 00:05:01.212 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:01.212 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:01.212 TEST_HEADER include/spdk/vhost.h 00:05:01.212 TEST_HEADER include/spdk/vmd.h 00:05:01.212 TEST_HEADER include/spdk/xor.h 00:05:01.212 TEST_HEADER include/spdk/zipf.h 00:05:01.212 CXX test/cpp_headers/accel.o 00:05:01.212 CC examples/blob/hello_world/hello_blob.o 00:05:01.212 CC examples/nvme/hello_world/hello_world.o 00:05:01.212 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:01.212 LINK vhost 00:05:01.212 CXX test/cpp_headers/accel_module.o 00:05:01.212 LINK nvme_fuzz 00:05:01.212 LINK spdk_nvme 00:05:01.470 CC examples/blob/cli/blobcli.o 00:05:01.470 LINK spdk_bdev 00:05:01.470 CXX test/cpp_headers/assert.o 00:05:01.470 LINK hello_world 00:05:01.471 LINK hello_blob 00:05:01.471 CXX test/cpp_headers/barrier.o 00:05:01.471 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:01.471 CC test/app/jsoncat/jsoncat.o 00:05:01.471 CXX test/cpp_headers/base64.o 00:05:01.471 LINK hello_fsdev 00:05:01.471 CXX test/cpp_headers/bdev.o 00:05:01.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:01.729 LINK accel_perf 00:05:01.729 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:01.729 LINK jsoncat 00:05:01.729 CXX test/cpp_headers/bdev_module.o 00:05:01.729 CC examples/nvme/reconnect/reconnect.o 00:05:01.729 CC test/app/stub/stub.o 00:05:01.729 LINK blobcli 00:05:01.729 CXX test/cpp_headers/bdev_zone.o 00:05:01.729 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:01.729 CC examples/nvme/arbitration/arbitration.o 00:05:01.729 CC examples/nvme/hotplug/hotplug.o 00:05:02.014 LINK stub 00:05:02.014 LINK vhost_fuzz 00:05:02.014 CXX test/cpp_headers/bit_array.o 00:05:02.014 CC test/env/mem_callbacks/mem_callbacks.o 00:05:02.014 LINK reconnect 00:05:02.014 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:02.014 CC test/env/vtophys/vtophys.o 00:05:02.014 CXX test/cpp_headers/bit_pool.o 00:05:02.014 LINK hotplug 00:05:02.014 CC test/env/memory/memory_ut.o 00:05:02.282 CC test/env/pci/pci_ut.o 00:05:02.282 LINK arbitration 00:05:02.282 LINK vtophys 00:05:02.282 LINK 
env_dpdk_post_init 00:05:02.282 CXX test/cpp_headers/blob_bdev.o 00:05:02.282 LINK nvme_manage 00:05:02.282 CXX test/cpp_headers/blobfs_bdev.o 00:05:02.282 CXX test/cpp_headers/blobfs.o 00:05:02.282 CXX test/cpp_headers/blob.o 00:05:02.282 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:02.541 CC examples/nvme/abort/abort.o 00:05:02.541 LINK mem_callbacks 00:05:02.541 CXX test/cpp_headers/conf.o 00:05:02.541 LINK pci_ut 00:05:02.541 CC test/event/event_perf/event_perf.o 00:05:02.541 LINK cmb_copy 00:05:02.541 CXX test/cpp_headers/config.o 00:05:02.541 CC examples/bdev/hello_world/hello_bdev.o 00:05:02.541 CXX test/cpp_headers/cpuset.o 00:05:02.541 LINK event_perf 00:05:02.541 CC test/event/reactor/reactor.o 00:05:02.541 CC examples/bdev/bdevperf/bdevperf.o 00:05:02.799 CC test/event/reactor_perf/reactor_perf.o 00:05:02.799 CXX test/cpp_headers/crc16.o 00:05:02.799 LINK reactor 00:05:02.799 CC test/event/app_repeat/app_repeat.o 00:05:02.799 LINK abort 00:05:02.799 LINK hello_bdev 00:05:02.799 LINK reactor_perf 00:05:02.799 CC test/event/scheduler/scheduler.o 00:05:02.799 CXX test/cpp_headers/crc32.o 00:05:02.799 CXX test/cpp_headers/crc64.o 00:05:02.799 LINK app_repeat 00:05:03.058 CXX test/cpp_headers/dif.o 00:05:03.058 CXX test/cpp_headers/dma.o 00:05:03.058 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:03.058 LINK scheduler 00:05:03.058 CXX test/cpp_headers/endian.o 00:05:03.058 CC test/rpc_client/rpc_client_test.o 00:05:03.058 LINK pmr_persistence 00:05:03.058 CC test/nvme/aer/aer.o 00:05:03.058 LINK iscsi_fuzz 00:05:03.058 CC test/nvme/reset/reset.o 00:05:03.317 CC test/accel/dif/dif.o 00:05:03.317 CXX test/cpp_headers/env_dpdk.o 00:05:03.317 LINK memory_ut 00:05:03.317 LINK rpc_client_test 00:05:03.317 CC test/nvme/sgl/sgl.o 00:05:03.317 LINK bdevperf 00:05:03.317 CC test/nvme/e2edp/nvme_dp.o 00:05:03.317 LINK reset 00:05:03.317 CXX test/cpp_headers/env.o 00:05:03.317 CC test/nvme/overhead/overhead.o 00:05:03.317 LINK aer 00:05:03.580 CXX test/cpp_headers/event.o 00:05:03.580 LINK sgl 00:05:03.580 CC test/nvme/err_injection/err_injection.o 00:05:03.580 CC test/blobfs/mkfs/mkfs.o 00:05:03.580 CXX test/cpp_headers/fd_group.o 00:05:03.580 CC test/lvol/esnap/esnap.o 00:05:03.580 LINK nvme_dp 00:05:03.580 CC examples/nvmf/nvmf/nvmf.o 00:05:03.580 CXX test/cpp_headers/fd.o 00:05:03.580 LINK err_injection 00:05:03.580 LINK overhead 00:05:03.580 CC test/nvme/startup/startup.o 00:05:03.837 LINK mkfs 00:05:03.837 LINK dif 00:05:03.837 CC test/nvme/reserve/reserve.o 00:05:03.837 CXX test/cpp_headers/file.o 00:05:03.837 CC test/nvme/simple_copy/simple_copy.o 00:05:03.837 LINK startup 00:05:03.837 CC test/nvme/boot_partition/boot_partition.o 00:05:03.837 CC test/nvme/connect_stress/connect_stress.o 00:05:03.837 CXX test/cpp_headers/fsdev.o 00:05:03.837 LINK nvmf 00:05:04.096 CC test/nvme/compliance/nvme_compliance.o 00:05:04.096 LINK reserve 00:05:04.096 CC test/nvme/fused_ordering/fused_ordering.o 00:05:04.096 CXX test/cpp_headers/fsdev_module.o 00:05:04.096 LINK boot_partition 00:05:04.096 LINK simple_copy 00:05:04.096 LINK connect_stress 00:05:04.096 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:04.096 CXX test/cpp_headers/ftl.o 00:05:04.096 CC test/nvme/fdp/fdp.o 00:05:04.096 LINK fused_ordering 00:05:04.096 CXX test/cpp_headers/fuse_dispatcher.o 00:05:04.096 CXX test/cpp_headers/gpt_spec.o 00:05:04.353 CC test/nvme/cuse/cuse.o 00:05:04.353 CXX test/cpp_headers/hexlify.o 00:05:04.353 LINK doorbell_aers 00:05:04.353 CXX test/cpp_headers/histogram_data.o 00:05:04.353 LINK 
nvme_compliance 00:05:04.353 CC test/bdev/bdevio/bdevio.o 00:05:04.353 CXX test/cpp_headers/idxd.o 00:05:04.353 CXX test/cpp_headers/idxd_spec.o 00:05:04.353 CXX test/cpp_headers/init.o 00:05:04.353 CXX test/cpp_headers/ioat.o 00:05:04.353 LINK fdp 00:05:04.353 CXX test/cpp_headers/ioat_spec.o 00:05:04.353 CXX test/cpp_headers/iscsi_spec.o 00:05:04.353 CXX test/cpp_headers/json.o 00:05:04.626 CXX test/cpp_headers/jsonrpc.o 00:05:04.626 CXX test/cpp_headers/keyring.o 00:05:04.626 CXX test/cpp_headers/keyring_module.o 00:05:04.626 CXX test/cpp_headers/likely.o 00:05:04.626 CXX test/cpp_headers/log.o 00:05:04.626 CXX test/cpp_headers/lvol.o 00:05:04.626 CXX test/cpp_headers/md5.o 00:05:04.626 CXX test/cpp_headers/memory.o 00:05:04.626 CXX test/cpp_headers/mmio.o 00:05:04.626 CXX test/cpp_headers/nbd.o 00:05:04.626 CXX test/cpp_headers/net.o 00:05:04.626 LINK bdevio 00:05:04.626 CXX test/cpp_headers/notify.o 00:05:04.626 CXX test/cpp_headers/nvme.o 00:05:04.626 CXX test/cpp_headers/nvme_intel.o 00:05:04.888 CXX test/cpp_headers/nvme_ocssd.o 00:05:04.888 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:04.888 CXX test/cpp_headers/nvme_spec.o 00:05:04.888 CXX test/cpp_headers/nvme_zns.o 00:05:04.888 CXX test/cpp_headers/nvmf_cmd.o 00:05:04.888 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:04.888 CXX test/cpp_headers/nvmf.o 00:05:04.888 CXX test/cpp_headers/nvmf_spec.o 00:05:04.888 CXX test/cpp_headers/nvmf_transport.o 00:05:04.888 CXX test/cpp_headers/opal.o 00:05:04.888 CXX test/cpp_headers/opal_spec.o 00:05:04.888 CXX test/cpp_headers/pci_ids.o 00:05:04.888 CXX test/cpp_headers/pipe.o 00:05:05.146 CXX test/cpp_headers/queue.o 00:05:05.146 CXX test/cpp_headers/reduce.o 00:05:05.146 CXX test/cpp_headers/rpc.o 00:05:05.146 CXX test/cpp_headers/scheduler.o 00:05:05.146 CXX test/cpp_headers/scsi.o 00:05:05.146 CXX test/cpp_headers/scsi_spec.o 00:05:05.146 CXX test/cpp_headers/sock.o 00:05:05.146 CXX test/cpp_headers/stdinc.o 00:05:05.146 CXX test/cpp_headers/string.o 00:05:05.146 CXX test/cpp_headers/thread.o 00:05:05.146 CXX test/cpp_headers/trace.o 00:05:05.146 CXX test/cpp_headers/trace_parser.o 00:05:05.146 CXX test/cpp_headers/tree.o 00:05:05.146 CXX test/cpp_headers/ublk.o 00:05:05.146 CXX test/cpp_headers/util.o 00:05:05.146 CXX test/cpp_headers/uuid.o 00:05:05.146 CXX test/cpp_headers/version.o 00:05:05.146 CXX test/cpp_headers/vfio_user_pci.o 00:05:05.146 CXX test/cpp_headers/vfio_user_spec.o 00:05:05.404 CXX test/cpp_headers/vhost.o 00:05:05.404 CXX test/cpp_headers/vmd.o 00:05:05.404 CXX test/cpp_headers/xor.o 00:05:05.404 CXX test/cpp_headers/zipf.o 00:05:05.404 LINK cuse 00:05:07.935 LINK esnap 00:05:07.935 00:05:07.935 real 1m4.233s 00:05:07.935 user 5m17.587s 00:05:07.935 sys 0m55.862s 00:05:07.935 23:00:27 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:07.935 23:00:27 make -- common/autotest_common.sh@10 -- $ set +x 00:05:07.935 ************************************ 00:05:07.935 END TEST make 00:05:07.935 ************************************ 00:05:08.203 23:00:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:08.203 23:00:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:08.203 23:00:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:08.203 23:00:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.203 23:00:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:08.203 23:00:27 -- pm/common@44 -- $ pid=5801 00:05:08.203 23:00:27 -- pm/common@50 -- $ kill -TERM 5801 
00:05:08.203 23:00:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.203 23:00:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:08.203 23:00:27 -- pm/common@44 -- $ pid=5802 00:05:08.203 23:00:27 -- pm/common@50 -- $ kill -TERM 5802 00:05:08.203 23:00:27 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.203 23:00:27 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.203 23:00:27 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.203 23:00:27 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.203 23:00:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.203 23:00:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.203 23:00:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.203 23:00:27 -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.203 23:00:27 -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.203 23:00:27 -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.203 23:00:27 -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.203 23:00:27 -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.203 23:00:27 -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.203 23:00:27 -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.203 23:00:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.203 23:00:27 -- scripts/common.sh@344 -- # case "$op" in 00:05:08.203 23:00:27 -- scripts/common.sh@345 -- # : 1 00:05:08.203 23:00:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.203 23:00:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.203 23:00:27 -- scripts/common.sh@365 -- # decimal 1 00:05:08.203 23:00:27 -- scripts/common.sh@353 -- # local d=1 00:05:08.203 23:00:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.203 23:00:27 -- scripts/common.sh@355 -- # echo 1 00:05:08.203 23:00:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.203 23:00:27 -- scripts/common.sh@366 -- # decimal 2 00:05:08.203 23:00:27 -- scripts/common.sh@353 -- # local d=2 00:05:08.203 23:00:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.203 23:00:27 -- scripts/common.sh@355 -- # echo 2 00:05:08.203 23:00:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.203 23:00:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.203 23:00:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.203 23:00:27 -- scripts/common.sh@368 -- # return 0 00:05:08.203 23:00:27 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.203 23:00:27 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.203 --rc genhtml_branch_coverage=1 00:05:08.203 --rc genhtml_function_coverage=1 00:05:08.203 --rc genhtml_legend=1 00:05:08.203 --rc geninfo_all_blocks=1 00:05:08.203 --rc geninfo_unexecuted_blocks=1 00:05:08.203 00:05:08.203 ' 00:05:08.203 23:00:27 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.203 --rc genhtml_branch_coverage=1 00:05:08.203 --rc genhtml_function_coverage=1 00:05:08.203 --rc genhtml_legend=1 00:05:08.203 --rc geninfo_all_blocks=1 00:05:08.203 --rc geninfo_unexecuted_blocks=1 00:05:08.203 00:05:08.203 ' 00:05:08.203 23:00:27 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.203 --rc 
genhtml_branch_coverage=1 00:05:08.203 --rc genhtml_function_coverage=1 00:05:08.203 --rc genhtml_legend=1 00:05:08.203 --rc geninfo_all_blocks=1 00:05:08.203 --rc geninfo_unexecuted_blocks=1 00:05:08.203 00:05:08.203 ' 00:05:08.203 23:00:27 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.203 --rc genhtml_branch_coverage=1 00:05:08.203 --rc genhtml_function_coverage=1 00:05:08.203 --rc genhtml_legend=1 00:05:08.203 --rc geninfo_all_blocks=1 00:05:08.203 --rc geninfo_unexecuted_blocks=1 00:05:08.203 00:05:08.203 ' 00:05:08.203 23:00:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.203 23:00:27 -- nvmf/common.sh@7 -- # uname -s 00:05:08.203 23:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.203 23:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.203 23:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.203 23:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.203 23:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.203 23:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.203 23:00:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.203 23:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.203 23:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.203 23:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.203 23:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:68fa7114-5336-4e39-bec4-9ce93b826881 00:05:08.203 23:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=68fa7114-5336-4e39-bec4-9ce93b826881 00:05:08.203 23:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.203 23:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.203 23:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.203 23:00:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.203 23:00:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.203 23:00:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.203 23:00:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.203 23:00:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.203 23:00:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.203 23:00:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.203 23:00:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.203 23:00:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.203 23:00:27 -- paths/export.sh@5 -- # export PATH 00:05:08.203 23:00:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.203 23:00:27 -- nvmf/common.sh@51 -- # : 0 00:05:08.203 23:00:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.203 23:00:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.203 23:00:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.203 23:00:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.203 23:00:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.203 23:00:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.203 23:00:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.203 23:00:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.203 23:00:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.203 23:00:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:08.203 23:00:27 -- spdk/autotest.sh@32 -- # uname -s 00:05:08.203 23:00:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:08.203 23:00:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:08.203 23:00:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.203 23:00:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:08.203 23:00:27 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:08.203 23:00:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:08.203 23:00:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:08.203 23:00:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:08.203 23:00:27 -- spdk/autotest.sh@48 -- # udevadm_pid=66998 00:05:08.203 23:00:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:08.203 23:00:27 -- pm/common@17 -- # local monitor 00:05:08.203 23:00:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.203 23:00:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:08.203 23:00:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:08.203 23:00:27 -- pm/common@21 -- # date +%s 00:05:08.203 23:00:27 -- pm/common@25 -- # sleep 1 00:05:08.203 23:00:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731970827 00:05:08.203 23:00:27 -- pm/common@21 -- # date +%s 00:05:08.465 23:00:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731970827 00:05:08.465 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731970827_collect-cpu-load.pm.log 00:05:08.465 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731970827_collect-vmstat.pm.log 00:05:09.409 23:00:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:09.409 23:00:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:09.409 23:00:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.409 23:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.409 23:00:28 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:09.409 23:00:28 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:09.409 23:00:28 -- common/autotest_common.sh@10 -- # set +x 00:05:09.409 23:00:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:09.409 23:00:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:09.409 23:00:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:09.409 23:00:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:09.409 23:00:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:09.409 23:00:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:09.409 23:00:28 -- common/autotest_common.sh@1455 -- # uname 00:05:09.409 23:00:28 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:09.409 23:00:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:09.409 23:00:28 -- common/autotest_common.sh@1475 -- # uname 00:05:09.409 23:00:28 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:09.409 23:00:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:09.409 23:00:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:09.409 lcov: LCOV version 1.15 00:05:09.409 23:00:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:24.315 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:24.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:39.251 23:00:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:39.251 23:00:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.251 23:00:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.251 23:00:58 -- spdk/autotest.sh@78 -- # rm -f 00:05:39.251 23:00:58 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.131 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:40.131 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:40.131 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:40.131 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:40.131 23:00:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:40.131 23:00:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:40.131 23:00:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:40.131 23:00:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:40.131 23:00:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:40.131 23:00:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:40.131 23:00:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:40.131 23:00:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:40.131 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.131 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.131 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:40.131 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:40.131 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:40.131 No valid GPT data, bailing 00:05:40.131 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:40.131 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.131 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.131 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:40.131 1+0 records in 00:05:40.131 1+0 records out 00:05:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308248 s, 34.0 MB/s 
00:05:40.131 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.132 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.132 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:40.132 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:40.132 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:40.392 No valid GPT data, bailing 00:05:40.392 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:40.392 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.392 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.392 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:40.392 1+0 records in 00:05:40.392 1+0 records out 00:05:40.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510626 s, 205 MB/s 00:05:40.392 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.392 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.392 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:40.392 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:40.392 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:40.392 No valid GPT data, bailing 00:05:40.392 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:40.392 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.392 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.392 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:40.392 1+0 records in 00:05:40.392 1+0 records out 00:05:40.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596159 s, 176 MB/s 00:05:40.393 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.393 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.393 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:40.393 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:40.393 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:40.393 No valid GPT data, bailing 00:05:40.393 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:40.393 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.393 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.393 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:40.393 1+0 records in 00:05:40.393 1+0 records out 00:05:40.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718929 s, 146 MB/s 00:05:40.393 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.393 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.393 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:40.393 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:40.393 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:40.654 No valid GPT data, bailing 00:05:40.654 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:40.654 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.654 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.654 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:40.654 1+0 records in 00:05:40.654 1+0 records out 00:05:40.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606956 s, 173 
MB/s 00:05:40.654 23:00:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:40.654 23:00:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:40.654 23:00:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:40.654 23:00:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:40.654 23:00:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:40.654 No valid GPT data, bailing 00:05:40.654 23:00:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:40.654 23:00:59 -- scripts/common.sh@394 -- # pt= 00:05:40.654 23:00:59 -- scripts/common.sh@395 -- # return 1 00:05:40.654 23:00:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:40.654 1+0 records in 00:05:40.654 1+0 records out 00:05:40.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619986 s, 169 MB/s 00:05:40.654 23:00:59 -- spdk/autotest.sh@105 -- # sync 00:05:40.915 23:01:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:40.915 23:01:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:40.915 23:01:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:42.829 23:01:01 -- spdk/autotest.sh@111 -- # uname -s 00:05:42.829 23:01:01 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:42.829 23:01:01 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:42.829 23:01:01 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:43.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.656 Hugepages 00:05:43.656 node hugesize free / total 00:05:43.656 node0 1048576kB 0 / 0 00:05:43.656 node0 2048kB 0 / 0 00:05:43.656 00:05:43.656 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:43.656 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:43.656 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:43.656 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:43.917 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:43.917 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:43.917 23:01:03 -- spdk/autotest.sh@117 -- # uname -s 00:05:43.917 23:01:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:43.917 23:01:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:43.917 23:01:03 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.433 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.433 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.692 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:45.692 23:01:05 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:47.075 23:01:06 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:47.075 23:01:06 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:47.075 23:01:06 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.075 23:01:06 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:47.075 23:01:06 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.075 23:01:06 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.075 23:01:06 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:47.075 23:01:06 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.075 23:01:06 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.075 23:01:06 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:47.075 23:01:06 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:47.075 23:01:06 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.075 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.336 Waiting for block devices as requested 00:05:47.336 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.597 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.597 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.903 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:52.903 23:01:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:52.903 23:01:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:52.903 23:01:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:52.903 23:01:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:52.903 23:01:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1541 -- # continue 00:05:52.903 23:01:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:52.903 23:01:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:52.903 23:01:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:52.903 23:01:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:52.903 23:01:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:52.903 23:01:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:52.903 23:01:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:52.903 23:01:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:52.903 23:01:12 -- common/autotest_common.sh@1541 -- # continue 00:05:52.903 23:01:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:52.903 23:01:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:52.903 23:01:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:52.904 23:01:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:52.904 23:01:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:52.904 23:01:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:52.904 23:01:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1541 -- # continue 00:05:52.904 23:01:12 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:52.904 23:01:12 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:52.904 23:01:12 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:52.904 23:01:12 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:52.904 23:01:12 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:52.904 23:01:12 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:52.904 23:01:12 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:52.904 23:01:12 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:52.904 23:01:12 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:52.904 23:01:12 -- common/autotest_common.sh@1541 -- # continue 00:05:52.904 23:01:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:52.904 23:01:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.904 23:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:52.904 23:01:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:52.904 23:01:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.904 23:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:52.904 23:01:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.049 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.049 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.049 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.049 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.308 23:01:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:54.308 23:01:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:54.308 23:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.308 23:01:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:54.308 23:01:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:54.308 23:01:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:54.308 23:01:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:54.308 23:01:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:54.308 23:01:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:54.308 23:01:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:54.308 23:01:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:54.308 23:01:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:54.308 
23:01:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:54.308 23:01:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.308 23:01:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:54.308 23:01:13 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:54.308 23:01:13 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:54.308 23:01:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:54.308 23:01:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:54.308 23:01:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.308 23:01:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:54.308 23:01:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.308 23:01:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:54.308 23:01:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.308 23:01:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:54.308 23:01:13 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:54.308 23:01:13 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.308 23:01:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:54.308 23:01:13 -- common/autotest_common.sh@1570 -- # return 0 00:05:54.308 23:01:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:54.308 23:01:13 -- common/autotest_common.sh@1578 -- # return 0 00:05:54.308 23:01:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:54.308 23:01:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:54.308 23:01:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:54.308 23:01:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:54.308 23:01:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:54.308 23:01:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.308 23:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.308 23:01:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:54.308 23:01:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.308 23:01:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.308 23:01:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.308 23:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.308 ************************************ 00:05:54.308 START TEST env 00:05:54.308 ************************************ 00:05:54.308 23:01:13 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.574 * Looking for test storage... 
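The opal_revert_cleanup pass above keeps only controllers whose PCI device ID matches 0x0a54, the ID of the OPAL-capable Intel parts the cleanup targets; the QEMU emulated controllers report 0x0010, so nothing matches and the revert is skipped. A sketch of that filter under the same sysfs assumptions (the hard-coded BDF list is illustrative):

#!/usr/bin/env bash
# Sketch: filter NVMe BDFs by PCI device ID, as the 0x0a54 check above does.
want=0x0a54
matched=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0010 here
    [[ "$device" == "$want" ]] && matched+=("$bdf")
done
echo "matched ${#matched[@]} controller(s): ${matched[*]}"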
00:05:54.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.574 23:01:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.574 23:01:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.574 23:01:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.574 23:01:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.574 23:01:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.574 23:01:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.574 23:01:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.574 23:01:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.574 23:01:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.574 23:01:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.574 23:01:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.574 23:01:13 env -- scripts/common.sh@344 -- # case "$op" in 00:05:54.574 23:01:13 env -- scripts/common.sh@345 -- # : 1 00:05:54.574 23:01:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.574 23:01:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.574 23:01:13 env -- scripts/common.sh@365 -- # decimal 1 00:05:54.574 23:01:13 env -- scripts/common.sh@353 -- # local d=1 00:05:54.574 23:01:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.574 23:01:13 env -- scripts/common.sh@355 -- # echo 1 00:05:54.574 23:01:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.574 23:01:13 env -- scripts/common.sh@366 -- # decimal 2 00:05:54.574 23:01:13 env -- scripts/common.sh@353 -- # local d=2 00:05:54.574 23:01:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.574 23:01:13 env -- scripts/common.sh@355 -- # echo 2 00:05:54.574 23:01:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.574 23:01:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.574 23:01:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.574 23:01:13 env -- scripts/common.sh@368 -- # return 0 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.574 --rc genhtml_branch_coverage=1 00:05:54.574 --rc genhtml_function_coverage=1 00:05:54.574 --rc genhtml_legend=1 00:05:54.574 --rc geninfo_all_blocks=1 00:05:54.574 --rc geninfo_unexecuted_blocks=1 00:05:54.574 00:05:54.574 ' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.574 --rc genhtml_branch_coverage=1 00:05:54.574 --rc genhtml_function_coverage=1 00:05:54.574 --rc genhtml_legend=1 00:05:54.574 --rc geninfo_all_blocks=1 00:05:54.574 --rc geninfo_unexecuted_blocks=1 00:05:54.574 00:05:54.574 ' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.574 --rc genhtml_branch_coverage=1 00:05:54.574 --rc genhtml_function_coverage=1 00:05:54.574 --rc 
genhtml_legend=1 00:05:54.574 --rc geninfo_all_blocks=1 00:05:54.574 --rc geninfo_unexecuted_blocks=1 00:05:54.574 00:05:54.574 ' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.574 --rc genhtml_branch_coverage=1 00:05:54.574 --rc genhtml_function_coverage=1 00:05:54.574 --rc genhtml_legend=1 00:05:54.574 --rc geninfo_all_blocks=1 00:05:54.574 --rc geninfo_unexecuted_blocks=1 00:05:54.574 00:05:54.574 ' 00:05:54.574 23:01:13 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.574 23:01:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.574 23:01:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.574 ************************************ 00:05:54.574 START TEST env_memory 00:05:54.574 ************************************ 00:05:54.574 23:01:13 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.574 00:05:54.574 00:05:54.574 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.574 http://cunit.sourceforge.net/ 00:05:54.574 00:05:54.574 00:05:54.574 Suite: memory 00:05:54.574 Test: alloc and free memory map ...[2024-11-18 23:01:13.843409] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.574 passed 00:05:54.574 Test: mem map translation ...[2024-11-18 23:01:13.882094] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.574 [2024-11-18 23:01:13.882133] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.574 [2024-11-18 23:01:13.882195] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.574 [2024-11-18 23:01:13.882210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.574 passed 00:05:54.832 Test: mem map registration ...[2024-11-18 23:01:13.951696] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:54.832 [2024-11-18 23:01:13.951734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:54.832 passed 00:05:54.832 Test: mem map adjacent registrations ...passed 00:05:54.832 00:05:54.832 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.832 suites 1 1 n/a 0 0 00:05:54.832 tests 4 4 4 0 0 00:05:54.832 asserts 152 152 152 0 n/a 00:05:54.832 00:05:54.832 Elapsed time = 0.234 seconds 00:05:54.832 00:05:54.832 real 0m0.270s 00:05:54.832 user 0m0.242s 00:05:54.832 sys 0m0.021s 00:05:54.832 23:01:14 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.832 ************************************ 00:05:54.832 23:01:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:54.832 END TEST env_memory 00:05:54.832 ************************************ 00:05:54.832 23:01:14 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.832 23:01:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.832 23:01:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.832 23:01:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.832 ************************************ 00:05:54.832 START TEST env_vtophys 00:05:54.832 ************************************ 00:05:54.832 23:01:14 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.832 EAL: lib.eal log level changed from notice to debug 00:05:54.832 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 1 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 2 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 3 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 4 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 5 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 6 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 7 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 8 as core 0 on socket 0 00:05:54.832 EAL: Detected lcore 9 as core 0 on socket 0 00:05:54.832 EAL: Maximum logical cores by configuration: 128 00:05:54.832 EAL: Detected CPU lcores: 10 00:05:54.832 EAL: Detected NUMA nodes: 1 00:05:54.832 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:54.832 EAL: Detected shared linkage of DPDK 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:54.832 EAL: Registered [vdev] bus. 00:05:54.832 EAL: bus.vdev log level changed from disabled to notice 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:54.832 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:54.832 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:54.832 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:54.832 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.832 EAL: No shared files mode enabled, IPC is disabled 00:05:54.832 EAL: Selected IOVA mode 'PA' 00:05:54.832 EAL: Probing VFIO support... 00:05:54.832 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:54.832 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:54.832 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.832 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.832 EAL: Setting up physically contiguous memory... 
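EAL skips VFIO above because /sys/module/vfio is absent, so the run proceeds over uio_pci_generic with IOVA mode 'PA'. The same presence test can be run by hand (module paths as in the EAL messages; the VA-mode remark is a general expectation, not something this log asserts):

#!/usr/bin/env bash
# Sketch: check for the vfio modules EAL probes for above.
if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "vfio loaded; with a working IOMMU, IOVA mode VA is usually available"
else
    echo "vfio missing; expect uio_pci_generic and IOVA mode PA, as in this run"
fi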
00:05:54.832 EAL: Setting maximum number of open files to 524288 00:05:54.832 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.832 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.832 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.832 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.832 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.832 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.832 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.832 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.832 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.832 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.832 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.832 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.832 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.832 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.832 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.832 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.832 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.832 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.832 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.832 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.832 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.832 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.832 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.832 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.832 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.832 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.832 EAL: Hugepages will be freed exactly as allocated. 00:05:54.832 EAL: No shared files mode enabled, IPC is disabled 00:05:54.832 EAL: No shared files mode enabled, IPC is disabled 00:05:55.090 EAL: TSC frequency is ~2600000 KHz 00:05:55.090 EAL: Main lcore 0 is ready (tid=7f7bf1716a40;cpuset=[0]) 00:05:55.090 EAL: Trying to obtain current memory policy. 00:05:55.090 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.090 EAL: Restoring previous memory policy: 0 00:05:55.090 EAL: request: mp_malloc_sync 00:05:55.090 EAL: No shared files mode enabled, IPC is disabled 00:05:55.090 EAL: Heap on socket 0 was expanded by 2MB 00:05:55.090 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:55.090 EAL: No shared files mode enabled, IPC is disabled 00:05:55.090 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:55.090 EAL: Mem event callback 'spdk:(nil)' registered 00:05:55.090 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:55.090 00:05:55.090 00:05:55.090 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.090 http://cunit.sourceforge.net/ 00:05:55.090 00:05:55.090 00:05:55.090 Suite: components_suite 00:05:55.355 Test: vtophys_malloc_test ...passed 00:05:55.355 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
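The memseg numbers above are internally consistent: each of the 4 segment lists holds n_segs:8192 hugepages of 2 MiB, which is exactly the 0x400000000-byte virtual area reserved per list. Verifying the arithmetic:

#!/usr/bin/env bash
# 8192 segments x 2 MiB hugepages per memseg list:
bytes=$(( 8192 * 2 * 1024 * 1024 ))
printf 'per-list VA: %d bytes = 0x%x\n' "$bytes" "$bytes"   # 0x400000000 (16 GiB)
# Four lists => 64 GiB of reserved virtual address space in total.
printf 'total reserved: 0x%x\n' $(( 4 * bytes ))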
00:05:55.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.356 EAL: Restoring previous memory policy: 4 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was expanded by 4MB 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was shrunk by 4MB 00:05:55.356 EAL: Trying to obtain current memory policy. 00:05:55.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.356 EAL: Restoring previous memory policy: 4 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was expanded by 6MB 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was shrunk by 6MB 00:05:55.356 EAL: Trying to obtain current memory policy. 00:05:55.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.356 EAL: Restoring previous memory policy: 4 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was expanded by 10MB 00:05:55.356 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.356 EAL: request: mp_malloc_sync 00:05:55.356 EAL: No shared files mode enabled, IPC is disabled 00:05:55.356 EAL: Heap on socket 0 was shrunk by 10MB 00:05:55.356 EAL: Trying to obtain current memory policy. 00:05:55.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.357 EAL: Restoring previous memory policy: 4 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was expanded by 18MB 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was shrunk by 18MB 00:05:55.357 EAL: Trying to obtain current memory policy. 00:05:55.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.357 EAL: Restoring previous memory policy: 4 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was expanded by 34MB 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was shrunk by 34MB 00:05:55.357 EAL: Trying to obtain current memory policy. 
00:05:55.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.357 EAL: Restoring previous memory policy: 4 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was expanded by 66MB 00:05:55.357 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.357 EAL: request: mp_malloc_sync 00:05:55.357 EAL: No shared files mode enabled, IPC is disabled 00:05:55.357 EAL: Heap on socket 0 was shrunk by 66MB 00:05:55.357 EAL: Trying to obtain current memory policy. 00:05:55.357 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.357 EAL: Restoring previous memory policy: 4 00:05:55.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.358 EAL: request: mp_malloc_sync 00:05:55.358 EAL: No shared files mode enabled, IPC is disabled 00:05:55.358 EAL: Heap on socket 0 was expanded by 130MB 00:05:55.358 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.358 EAL: request: mp_malloc_sync 00:05:55.358 EAL: No shared files mode enabled, IPC is disabled 00:05:55.358 EAL: Heap on socket 0 was shrunk by 130MB 00:05:55.358 EAL: Trying to obtain current memory policy. 00:05:55.358 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.630 EAL: Restoring previous memory policy: 4 00:05:55.630 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.630 EAL: request: mp_malloc_sync 00:05:55.630 EAL: No shared files mode enabled, IPC is disabled 00:05:55.630 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.630 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.630 EAL: request: mp_malloc_sync 00:05:55.630 EAL: No shared files mode enabled, IPC is disabled 00:05:55.630 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.630 EAL: Trying to obtain current memory policy. 00:05:55.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.630 EAL: Restoring previous memory policy: 4 00:05:55.630 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.630 EAL: request: mp_malloc_sync 00:05:55.630 EAL: No shared files mode enabled, IPC is disabled 00:05:55.630 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.630 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.888 EAL: request: mp_malloc_sync 00:05:55.888 EAL: No shared files mode enabled, IPC is disabled 00:05:55.888 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.888 EAL: Trying to obtain current memory policy. 
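The expand/shrink sizes vtophys_spdk_malloc_test steps through (4, 6, 10, 18, 34, 66, 130, 258, 514 MB above, plus the 1026 MB round that follows) fit the pattern (2^k + 2) MB. A one-liner reproduces the sequence:

#!/usr/bin/env bash
# (2^k + 2) MB for k = 1..10 matches the heap sizes in the trace.
for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB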
00:05:55.888 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.888 EAL: Restoring previous memory policy: 4 00:05:55.888 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.888 EAL: request: mp_malloc_sync 00:05:55.888 EAL: No shared files mode enabled, IPC is disabled 00:05:55.888 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.146 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.406 passed 00:05:56.406 00:05:56.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.406 suites 1 1 n/a 0 0 00:05:56.406 tests 2 2 2 0 0 00:05:56.406 asserts 5246 5246 5246 0 n/a 00:05:56.406 00:05:56.406 Elapsed time = 1.216 seconds 00:05:56.406 EAL: request: mp_malloc_sync 00:05:56.406 EAL: No shared files mode enabled, IPC is disabled 00:05:56.406 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.406 EAL: request: mp_malloc_sync 00:05:56.406 EAL: No shared files mode enabled, IPC is disabled 00:05:56.406 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.406 EAL: No shared files mode enabled, IPC is disabled 00:05:56.406 EAL: No shared files mode enabled, IPC is disabled 00:05:56.406 EAL: No shared files mode enabled, IPC is disabled 00:05:56.406 00:05:56.406 real 0m1.442s 00:05:56.406 user 0m0.604s 00:05:56.406 sys 0m0.698s 00:05:56.406 23:01:15 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.406 ************************************ 00:05:56.406 END TEST env_vtophys 00:05:56.406 ************************************ 00:05:56.406 23:01:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 23:01:15 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.406 23:01:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.406 23:01:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.406 23:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 START TEST env_pci 00:05:56.406 ************************************ 00:05:56.406 23:01:15 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.406 00:05:56.406 00:05:56.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.406 http://cunit.sourceforge.net/ 00:05:56.406 00:05:56.406 00:05:56.406 Suite: pci 00:05:56.406 Test: pci_hook ...[2024-11-18 23:01:15.607979] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69760 has claimed it 00:05:56.406 passed 00:05:56.406 00:05:56.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.406 suites 1 1 n/a 0 0 00:05:56.406 tests 1 1 1 0 0 00:05:56.406 asserts 25 25 25 0 n/a 00:05:56.406 00:05:56.406 Elapsed time = 0.006 seconds 00:05:56.406 EAL: Cannot find device (10000:00:01.0) 00:05:56.406 EAL: Failed to attach device on primary process 00:05:56.406 00:05:56.406 real 0m0.055s 00:05:56.406 user 0m0.020s 00:05:56.406 sys 0m0.034s 00:05:56.406 23:01:15 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.406 ************************************ 00:05:56.406 END TEST env_pci 00:05:56.406 ************************************ 00:05:56.406 23:01:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 23:01:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.406 23:01:15 env -- env/env.sh@15 -- # uname 00:05:56.406 23:01:15 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.406 23:01:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.406 23:01:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.406 23:01:15 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:56.406 23:01:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.406 23:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.406 ************************************ 00:05:56.406 START TEST env_dpdk_post_init 00:05:56.406 ************************************ 00:05:56.406 23:01:15 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.406 EAL: Detected CPU lcores: 10 00:05:56.406 EAL: Detected NUMA nodes: 1 00:05:56.406 EAL: Detected shared linkage of DPDK 00:05:56.406 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.406 EAL: Selected IOVA mode 'PA' 00:05:56.665 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:56.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:56.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:56.665 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:56.665 Starting DPDK initialization... 00:05:56.665 Starting SPDK post initialization... 00:05:56.665 SPDK NVMe probe 00:05:56.665 Attaching to 0000:00:10.0 00:05:56.665 Attaching to 0000:00:11.0 00:05:56.665 Attaching to 0000:00:12.0 00:05:56.665 Attaching to 0000:00:13.0 00:05:56.665 Attached to 0000:00:13.0 00:05:56.665 Attached to 0000:00:10.0 00:05:56.665 Attached to 0000:00:11.0 00:05:56.665 Attached to 0000:00:12.0 00:05:56.665 Cleaning up... 
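env_dpdk_post_init attaches all four controllers above through uio_pci_generic; note the attach order (13.0 first) differs from the probe order because probing completes asynchronously. Which kernel driver a BDF is currently bound to can be read straight from sysfs, as in this sketch:

#!/usr/bin/env bash
# Sketch: report the bound driver for each test controller.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    link="/sys/bus/pci/devices/$bdf/driver"
    if [[ -e "$link" ]]; then
        echo "$bdf -> $(basename "$(readlink -f "$link")")"
    else
        echo "$bdf -> unbound"
    fi
done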
00:05:56.665 00:05:56.665 real 0m0.224s 00:05:56.665 user 0m0.063s 00:05:56.665 sys 0m0.064s 00:05:56.665 23:01:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.665 23:01:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:56.665 ************************************ 00:05:56.665 END TEST env_dpdk_post_init 00:05:56.665 ************************************ 00:05:56.665 23:01:15 env -- env/env.sh@26 -- # uname 00:05:56.665 23:01:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.665 23:01:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.665 23:01:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.665 23:01:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.665 23:01:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.665 ************************************ 00:05:56.665 START TEST env_mem_callbacks 00:05:56.665 ************************************ 00:05:56.665 23:01:15 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.665 EAL: Detected CPU lcores: 10 00:05:56.665 EAL: Detected NUMA nodes: 1 00:05:56.665 EAL: Detected shared linkage of DPDK 00:05:56.665 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.665 EAL: Selected IOVA mode 'PA' 00:05:56.924 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.924 00:05:56.924 00:05:56.924 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.924 http://cunit.sourceforge.net/ 00:05:56.924 00:05:56.924 00:05:56.924 Suite: memory 00:05:56.924 Test: test ... 00:05:56.924 register 0x200000200000 2097152 00:05:56.924 malloc 3145728 00:05:56.924 register 0x200000400000 4194304 00:05:56.924 buf 0x200000500000 len 3145728 PASSED 00:05:56.924 malloc 64 00:05:56.924 buf 0x2000004fff40 len 64 PASSED 00:05:56.924 malloc 4194304 00:05:56.924 register 0x200000800000 6291456 00:05:56.924 buf 0x200000a00000 len 4194304 PASSED 00:05:56.924 free 0x200000500000 3145728 00:05:56.924 free 0x2000004fff40 64 00:05:56.924 unregister 0x200000400000 4194304 PASSED 00:05:56.924 free 0x200000a00000 4194304 00:05:56.924 unregister 0x200000800000 6291456 PASSED 00:05:56.924 malloc 8388608 00:05:56.924 register 0x200000400000 10485760 00:05:56.924 buf 0x200000600000 len 8388608 PASSED 00:05:56.924 free 0x200000600000 8388608 00:05:56.924 unregister 0x200000400000 10485760 PASSED 00:05:56.924 passed 00:05:56.924 00:05:56.924 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.924 suites 1 1 n/a 0 0 00:05:56.924 tests 1 1 1 0 0 00:05:56.924 asserts 15 15 15 0 n/a 00:05:56.924 00:05:56.924 Elapsed time = 0.011 seconds 00:05:56.924 00:05:56.924 real 0m0.168s 00:05:56.924 user 0m0.027s 00:05:56.924 sys 0m0.039s 00:05:56.924 23:01:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.924 ************************************ 00:05:56.924 END TEST env_mem_callbacks 00:05:56.924 ************************************ 00:05:56.924 23:01:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:56.924 00:05:56.924 real 0m2.547s 00:05:56.924 user 0m1.093s 00:05:56.924 sys 0m1.090s 00:05:56.924 23:01:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.924 23:01:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.924 ************************************ 00:05:56.924 END TEST env 00:05:56.924 
************************************ 00:05:56.924 23:01:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.924 23:01:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.924 23:01:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.924 23:01:16 -- common/autotest_common.sh@10 -- # set +x 00:05:56.924 ************************************ 00:05:56.924 START TEST rpc 00:05:56.924 ************************************ 00:05:56.924 23:01:16 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.924 * Looking for test storage... 00:05:56.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.924 23:01:16 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.924 23:01:16 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.924 23:01:16 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.183 23:01:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.183 23:01:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.183 23:01:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.183 23:01:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.183 23:01:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.183 23:01:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.183 23:01:16 rpc -- scripts/common.sh@345 -- # : 1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.183 23:01:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.183 23:01:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.183 23:01:16 rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.183 23:01:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.183 23:01:16 rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.183 23:01:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.183 23:01:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.183 23:01:16 rpc -- scripts/common.sh@368 -- # return 0 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.183 --rc genhtml_branch_coverage=1 00:05:57.183 --rc genhtml_function_coverage=1 00:05:57.183 --rc genhtml_legend=1 00:05:57.183 --rc geninfo_all_blocks=1 00:05:57.183 --rc geninfo_unexecuted_blocks=1 00:05:57.183 00:05:57.183 ' 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.183 --rc genhtml_branch_coverage=1 00:05:57.183 --rc genhtml_function_coverage=1 00:05:57.183 --rc genhtml_legend=1 00:05:57.183 --rc geninfo_all_blocks=1 00:05:57.183 --rc geninfo_unexecuted_blocks=1 00:05:57.183 00:05:57.183 ' 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.183 --rc genhtml_branch_coverage=1 00:05:57.183 --rc genhtml_function_coverage=1 00:05:57.183 --rc genhtml_legend=1 00:05:57.183 --rc geninfo_all_blocks=1 00:05:57.183 --rc geninfo_unexecuted_blocks=1 00:05:57.183 00:05:57.183 ' 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.183 --rc genhtml_branch_coverage=1 00:05:57.183 --rc genhtml_function_coverage=1 00:05:57.183 --rc genhtml_legend=1 00:05:57.183 --rc geninfo_all_blocks=1 00:05:57.183 --rc geninfo_unexecuted_blocks=1 00:05:57.183 00:05:57.183 ' 00:05:57.183 23:01:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69887 00:05:57.183 23:01:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.183 23:01:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69887 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 69887 ']' 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.183 23:01:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
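The lt 1.15 2 walk just above (also run earlier in the env suite) is scripts/common.sh comparing dotted versions field by field: split both on the .-: separators, pad the shorter with zeros, and decide at the first differing component. A compact standalone sketch of the same comparison; the function name is illustrative and the decimal-validation step of the original is omitted:

#!/usr/bin/env bash
# Sketch: field-wise dotted-version "less than", mirroring cmp_versions.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal
}
version_lt 1.15 2 && echo "1.15 < 2"    # true: 1 < 2 in the first field
# Fields compare numerically, not lexically: 15 > 2, so 1.15 >= 1.2.
version_lt 1.15 1.2 && echo "1.15 < 1.2" || echo "1.15 >= 1.2"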
00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.183 23:01:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.183 [2024-11-18 23:01:16.436757] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:57.183 [2024-11-18 23:01:16.436862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69887 ] 00:05:57.498 [2024-11-18 23:01:16.586970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.498 [2024-11-18 23:01:16.620138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:57.498 [2024-11-18 23:01:16.620201] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69887' to capture a snapshot of events at runtime. 00:05:57.498 [2024-11-18 23:01:16.620217] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.498 [2024-11-18 23:01:16.620225] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.498 [2024-11-18 23:01:16.620240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69887 for offline analysis/debug. 00:05:57.498 [2024-11-18 23:01:16.620271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.082 23:01:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.082 23:01:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.082 23:01:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.082 23:01:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.082 23:01:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.082 23:01:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.082 23:01:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.082 23:01:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.082 23:01:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.082 ************************************ 00:05:58.082 START TEST rpc_integrity 00:05:58.082 ************************************ 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.082 
23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.082 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.082 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.082 { 00:05:58.082 "name": "Malloc0", 00:05:58.082 "aliases": [ 00:05:58.082 "33f4cf9f-e7b7-42e1-896b-c154a7c52887" 00:05:58.082 ], 00:05:58.083 "product_name": "Malloc disk", 00:05:58.083 "block_size": 512, 00:05:58.083 "num_blocks": 16384, 00:05:58.083 "uuid": "33f4cf9f-e7b7-42e1-896b-c154a7c52887", 00:05:58.083 "assigned_rate_limits": { 00:05:58.083 "rw_ios_per_sec": 0, 00:05:58.083 "rw_mbytes_per_sec": 0, 00:05:58.083 "r_mbytes_per_sec": 0, 00:05:58.083 "w_mbytes_per_sec": 0 00:05:58.083 }, 00:05:58.083 "claimed": false, 00:05:58.083 "zoned": false, 00:05:58.083 "supported_io_types": { 00:05:58.083 "read": true, 00:05:58.083 "write": true, 00:05:58.083 "unmap": true, 00:05:58.083 "flush": true, 00:05:58.083 "reset": true, 00:05:58.083 "nvme_admin": false, 00:05:58.083 "nvme_io": false, 00:05:58.083 "nvme_io_md": false, 00:05:58.083 "write_zeroes": true, 00:05:58.083 "zcopy": true, 00:05:58.083 "get_zone_info": false, 00:05:58.083 "zone_management": false, 00:05:58.083 "zone_append": false, 00:05:58.083 "compare": false, 00:05:58.083 "compare_and_write": false, 00:05:58.083 "abort": true, 00:05:58.083 "seek_hole": false, 00:05:58.083 "seek_data": false, 00:05:58.083 "copy": true, 00:05:58.083 "nvme_iov_md": false 00:05:58.083 }, 00:05:58.083 "memory_domains": [ 00:05:58.083 { 00:05:58.083 "dma_device_id": "system", 00:05:58.083 "dma_device_type": 1 00:05:58.083 }, 00:05:58.083 { 00:05:58.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.083 "dma_device_type": 2 00:05:58.083 } 00:05:58.083 ], 00:05:58.083 "driver_specific": {} 00:05:58.083 } 00:05:58.083 ]' 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.083 [2024-11-18 23:01:17.383625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:58.083 [2024-11-18 23:01:17.383682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.083 [2024-11-18 23:01:17.383712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:58.083 [2024-11-18 23:01:17.383721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.083 [2024-11-18 23:01:17.386037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.083 [2024-11-18 23:01:17.386074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.083 Passthru0 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
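rpc_integrity is a thin wrapper around JSON-RPC calls to the spdk_tgt started earlier on /var/tmp/spdk.sock; rpc_cmd forwards to scripts/rpc.py. The steps above and the checks that follow can be replayed by hand against a running target, roughly as in this sketch (socket path as in the trace):

#!/usr/bin/env bash
# Sketch: replay the rpc_integrity sequence against a running spdk_tgt.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

$RPC -s $SOCK bdev_malloc_create 8 512            # 8 MiB, 512 B blocks -> Malloc0
$RPC -s $SOCK bdev_passthru_create -b Malloc0 -p Passthru0
$RPC -s $SOCK bdev_get_bdevs | jq length          # expect 2, as the test asserts
$RPC -s $SOCK bdev_get_bdevs | jq -r '.[].name'   # Malloc0, Passthru0
$RPC -s $SOCK bdev_passthru_delete Passthru0
$RPC -s $SOCK bdev_malloc_delete Malloc0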
00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.083 { 00:05:58.083 "name": "Malloc0", 00:05:58.083 "aliases": [ 00:05:58.083 "33f4cf9f-e7b7-42e1-896b-c154a7c52887" 00:05:58.083 ], 00:05:58.083 "product_name": "Malloc disk", 00:05:58.083 "block_size": 512, 00:05:58.083 "num_blocks": 16384, 00:05:58.083 "uuid": "33f4cf9f-e7b7-42e1-896b-c154a7c52887", 00:05:58.083 "assigned_rate_limits": { 00:05:58.083 "rw_ios_per_sec": 0, 00:05:58.083 "rw_mbytes_per_sec": 0, 00:05:58.083 "r_mbytes_per_sec": 0, 00:05:58.083 "w_mbytes_per_sec": 0 00:05:58.083 }, 00:05:58.083 "claimed": true, 00:05:58.083 "claim_type": "exclusive_write", 00:05:58.083 "zoned": false, 00:05:58.083 "supported_io_types": { 00:05:58.083 "read": true, 00:05:58.083 "write": true, 00:05:58.083 "unmap": true, 00:05:58.083 "flush": true, 00:05:58.083 "reset": true, 00:05:58.083 "nvme_admin": false, 00:05:58.083 "nvme_io": false, 00:05:58.083 "nvme_io_md": false, 00:05:58.083 "write_zeroes": true, 00:05:58.083 "zcopy": true, 00:05:58.083 "get_zone_info": false, 00:05:58.083 "zone_management": false, 00:05:58.083 "zone_append": false, 00:05:58.083 "compare": false, 00:05:58.083 "compare_and_write": false, 00:05:58.083 "abort": true, 00:05:58.083 "seek_hole": false, 00:05:58.083 "seek_data": false, 00:05:58.083 "copy": true, 00:05:58.083 "nvme_iov_md": false 00:05:58.083 }, 00:05:58.083 "memory_domains": [ 00:05:58.083 { 00:05:58.083 "dma_device_id": "system", 00:05:58.083 "dma_device_type": 1 00:05:58.083 }, 00:05:58.083 { 00:05:58.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.083 "dma_device_type": 2 00:05:58.083 } 00:05:58.083 ], 00:05:58.083 "driver_specific": {} 00:05:58.083 }, 00:05:58.083 { 00:05:58.083 "name": "Passthru0", 00:05:58.083 "aliases": [ 00:05:58.083 "546d5f7e-9c09-5c46-a18e-6528a795f1bb" 00:05:58.083 ], 00:05:58.083 "product_name": "passthru", 00:05:58.083 "block_size": 512, 00:05:58.083 "num_blocks": 16384, 00:05:58.083 "uuid": "546d5f7e-9c09-5c46-a18e-6528a795f1bb", 00:05:58.083 "assigned_rate_limits": { 00:05:58.083 "rw_ios_per_sec": 0, 00:05:58.083 "rw_mbytes_per_sec": 0, 00:05:58.083 "r_mbytes_per_sec": 0, 00:05:58.083 "w_mbytes_per_sec": 0 00:05:58.083 }, 00:05:58.083 "claimed": false, 00:05:58.083 "zoned": false, 00:05:58.083 "supported_io_types": { 00:05:58.083 "read": true, 00:05:58.083 "write": true, 00:05:58.083 "unmap": true, 00:05:58.083 "flush": true, 00:05:58.083 "reset": true, 00:05:58.083 "nvme_admin": false, 00:05:58.083 "nvme_io": false, 00:05:58.083 "nvme_io_md": false, 00:05:58.083 "write_zeroes": true, 00:05:58.083 "zcopy": true, 00:05:58.083 "get_zone_info": false, 00:05:58.083 "zone_management": false, 00:05:58.083 "zone_append": false, 00:05:58.083 "compare": false, 00:05:58.083 "compare_and_write": false, 00:05:58.083 "abort": true, 00:05:58.083 "seek_hole": false, 00:05:58.083 "seek_data": false, 00:05:58.083 "copy": true, 00:05:58.083 "nvme_iov_md": false 00:05:58.083 }, 00:05:58.083 "memory_domains": [ 00:05:58.083 { 00:05:58.083 "dma_device_id": "system", 00:05:58.083 "dma_device_type": 1 00:05:58.083 }, 00:05:58.083 { 00:05:58.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.083 "dma_device_type": 
2 00:05:58.083 } 00:05:58.083 ], 00:05:58.083 "driver_specific": { 00:05:58.083 "passthru": { 00:05:58.083 "name": "Passthru0", 00:05:58.083 "base_bdev_name": "Malloc0" 00:05:58.083 } 00:05:58.083 } 00:05:58.083 } 00:05:58.083 ]' 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.083 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.083 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.344 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.344 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.344 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.344 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.344 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.344 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.345 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:58.345 ************************************ 00:05:58.345 END TEST rpc_integrity 00:05:58.345 ************************************ 00:05:58.345 23:01:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.345 00:05:58.345 real 0m0.232s 00:05:58.345 user 0m0.128s 00:05:58.345 sys 0m0.037s 00:05:58.345 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 ************************************ 00:05:58.345 START TEST rpc_plugins 00:05:58.345 ************************************ 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:58.345 { 00:05:58.345 "name": "Malloc1", 00:05:58.345 
"aliases": [ 00:05:58.345 "e0dabd91-2545-4efb-9119-c61325ab6eaa" 00:05:58.345 ], 00:05:58.345 "product_name": "Malloc disk", 00:05:58.345 "block_size": 4096, 00:05:58.345 "num_blocks": 256, 00:05:58.345 "uuid": "e0dabd91-2545-4efb-9119-c61325ab6eaa", 00:05:58.345 "assigned_rate_limits": { 00:05:58.345 "rw_ios_per_sec": 0, 00:05:58.345 "rw_mbytes_per_sec": 0, 00:05:58.345 "r_mbytes_per_sec": 0, 00:05:58.345 "w_mbytes_per_sec": 0 00:05:58.345 }, 00:05:58.345 "claimed": false, 00:05:58.345 "zoned": false, 00:05:58.345 "supported_io_types": { 00:05:58.345 "read": true, 00:05:58.345 "write": true, 00:05:58.345 "unmap": true, 00:05:58.345 "flush": true, 00:05:58.345 "reset": true, 00:05:58.345 "nvme_admin": false, 00:05:58.345 "nvme_io": false, 00:05:58.345 "nvme_io_md": false, 00:05:58.345 "write_zeroes": true, 00:05:58.345 "zcopy": true, 00:05:58.345 "get_zone_info": false, 00:05:58.345 "zone_management": false, 00:05:58.345 "zone_append": false, 00:05:58.345 "compare": false, 00:05:58.345 "compare_and_write": false, 00:05:58.345 "abort": true, 00:05:58.345 "seek_hole": false, 00:05:58.345 "seek_data": false, 00:05:58.345 "copy": true, 00:05:58.345 "nvme_iov_md": false 00:05:58.345 }, 00:05:58.345 "memory_domains": [ 00:05:58.345 { 00:05:58.345 "dma_device_id": "system", 00:05:58.345 "dma_device_type": 1 00:05:58.345 }, 00:05:58.345 { 00:05:58.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.345 "dma_device_type": 2 00:05:58.345 } 00:05:58.345 ], 00:05:58.345 "driver_specific": {} 00:05:58.345 } 00:05:58.345 ]' 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:58.345 ************************************ 00:05:58.345 END TEST rpc_plugins 00:05:58.345 ************************************ 00:05:58.345 23:01:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:58.345 00:05:58.345 real 0m0.117s 00:05:58.345 user 0m0.068s 00:05:58.345 sys 0m0.015s 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 23:01:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.345 23:01:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.345 ************************************ 00:05:58.345 START TEST rpc_trace_cmd_test 00:05:58.345 ************************************ 00:05:58.345 23:01:17 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:58.345 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:58.345 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:58.345 23:01:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.345 23:01:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.615 23:01:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.615 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:58.615 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69887", 00:05:58.615 "tpoint_group_mask": "0x8", 00:05:58.615 "iscsi_conn": { 00:05:58.615 "mask": "0x2", 00:05:58.615 "tpoint_mask": "0x0" 00:05:58.615 }, 00:05:58.617 "scsi": { 00:05:58.617 "mask": "0x4", 00:05:58.617 "tpoint_mask": "0x0" 00:05:58.617 }, 00:05:58.617 "bdev": { 00:05:58.617 "mask": "0x8", 00:05:58.617 "tpoint_mask": "0xffffffffffffffff" 00:05:58.617 }, 00:05:58.617 "nvmf_rdma": { 00:05:58.617 "mask": "0x10", 00:05:58.617 "tpoint_mask": "0x0" 00:05:58.617 }, 00:05:58.617 "nvmf_tcp": { 00:05:58.617 "mask": "0x20", 00:05:58.617 "tpoint_mask": "0x0" 00:05:58.617 }, 00:05:58.617 "ftl": { 00:05:58.617 "mask": "0x40", 00:05:58.617 "tpoint_mask": "0x0" 00:05:58.617 }, 00:05:58.618 "blobfs": { 00:05:58.618 "mask": "0x80", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "dsa": { 00:05:58.618 "mask": "0x200", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "thread": { 00:05:58.618 "mask": "0x400", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "nvme_pcie": { 00:05:58.618 "mask": "0x800", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "iaa": { 00:05:58.618 "mask": "0x1000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "nvme_tcp": { 00:05:58.618 "mask": "0x2000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "bdev_nvme": { 00:05:58.618 "mask": "0x4000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "sock": { 00:05:58.618 "mask": "0x8000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "blob": { 00:05:58.618 "mask": "0x10000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 }, 00:05:58.618 "bdev_raid": { 00:05:58.618 "mask": "0x20000", 00:05:58.618 "tpoint_mask": "0x0" 00:05:58.618 } 00:05:58.618 }' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:58.618 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:58.618 ************************************ 00:05:58.618 END TEST rpc_trace_cmd_test 00:05:58.618 ************************************ 00:05:58.619 23:01:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:58.619 00:05:58.619 real 0m0.184s 00:05:58.619 user 0m0.146s 00:05:58.619 sys 0m0.028s 00:05:58.619 23:01:17 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.619 23:01:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:58.619 23:01:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:58.619 23:01:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:58.619 23:01:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:58.619 23:01:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.619 23:01:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.619 23:01:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.619 ************************************ 00:05:58.619 START TEST rpc_daemon_integrity 00:05:58.619 ************************************ 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.619 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.620 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.888 { 00:05:58.888 "name": "Malloc2", 00:05:58.888 "aliases": [ 00:05:58.888 "8e3ce3f1-2093-45cc-b9b8-bbccadefc5b0" 00:05:58.888 ], 00:05:58.888 "product_name": "Malloc disk", 00:05:58.888 "block_size": 512, 00:05:58.888 "num_blocks": 16384, 00:05:58.888 "uuid": "8e3ce3f1-2093-45cc-b9b8-bbccadefc5b0", 00:05:58.888 "assigned_rate_limits": { 00:05:58.888 "rw_ios_per_sec": 0, 00:05:58.888 "rw_mbytes_per_sec": 0, 00:05:58.888 "r_mbytes_per_sec": 0, 00:05:58.888 "w_mbytes_per_sec": 0 00:05:58.888 }, 00:05:58.888 "claimed": false, 00:05:58.888 "zoned": false, 00:05:58.888 "supported_io_types": { 00:05:58.888 "read": true, 00:05:58.888 "write": true, 00:05:58.888 "unmap": true, 00:05:58.888 "flush": true, 00:05:58.888 "reset": true, 00:05:58.888 "nvme_admin": false, 00:05:58.888 "nvme_io": false, 00:05:58.888 "nvme_io_md": false, 00:05:58.888 "write_zeroes": true, 00:05:58.888 "zcopy": true, 00:05:58.888 "get_zone_info": false, 00:05:58.888 "zone_management": false, 00:05:58.888 "zone_append": false, 00:05:58.888 "compare": false, 00:05:58.888 "compare_and_write": false, 00:05:58.888 "abort": true, 00:05:58.888 
"seek_hole": false, 00:05:58.888 "seek_data": false, 00:05:58.888 "copy": true, 00:05:58.888 "nvme_iov_md": false 00:05:58.888 }, 00:05:58.888 "memory_domains": [ 00:05:58.888 { 00:05:58.888 "dma_device_id": "system", 00:05:58.888 "dma_device_type": 1 00:05:58.888 }, 00:05:58.888 { 00:05:58.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.888 "dma_device_type": 2 00:05:58.888 } 00:05:58.888 ], 00:05:58.888 "driver_specific": {} 00:05:58.888 } 00:05:58.888 ]' 00:05:58.888 23:01:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 [2024-11-18 23:01:18.036023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:58.888 [2024-11-18 23:01:18.036077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.888 [2024-11-18 23:01:18.036098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:58.888 [2024-11-18 23:01:18.036107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.888 [2024-11-18 23:01:18.038313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.888 [2024-11-18 23:01:18.038347] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.888 Passthru0 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.888 { 00:05:58.888 "name": "Malloc2", 00:05:58.888 "aliases": [ 00:05:58.888 "8e3ce3f1-2093-45cc-b9b8-bbccadefc5b0" 00:05:58.888 ], 00:05:58.888 "product_name": "Malloc disk", 00:05:58.888 "block_size": 512, 00:05:58.888 "num_blocks": 16384, 00:05:58.888 "uuid": "8e3ce3f1-2093-45cc-b9b8-bbccadefc5b0", 00:05:58.888 "assigned_rate_limits": { 00:05:58.888 "rw_ios_per_sec": 0, 00:05:58.888 "rw_mbytes_per_sec": 0, 00:05:58.888 "r_mbytes_per_sec": 0, 00:05:58.888 "w_mbytes_per_sec": 0 00:05:58.888 }, 00:05:58.888 "claimed": true, 00:05:58.888 "claim_type": "exclusive_write", 00:05:58.888 "zoned": false, 00:05:58.888 "supported_io_types": { 00:05:58.888 "read": true, 00:05:58.888 "write": true, 00:05:58.888 "unmap": true, 00:05:58.888 "flush": true, 00:05:58.888 "reset": true, 00:05:58.888 "nvme_admin": false, 00:05:58.888 "nvme_io": false, 00:05:58.888 "nvme_io_md": false, 00:05:58.888 "write_zeroes": true, 00:05:58.888 "zcopy": true, 00:05:58.888 "get_zone_info": false, 00:05:58.888 "zone_management": false, 00:05:58.888 "zone_append": false, 00:05:58.888 "compare": false, 00:05:58.888 "compare_and_write": false, 00:05:58.888 "abort": true, 00:05:58.888 "seek_hole": false, 00:05:58.888 "seek_data": false, 00:05:58.888 "copy": true, 00:05:58.888 "nvme_iov_md": false 
00:05:58.888 }, 00:05:58.888 "memory_domains": [ 00:05:58.888 { 00:05:58.888 "dma_device_id": "system", 00:05:58.888 "dma_device_type": 1 00:05:58.888 }, 00:05:58.888 { 00:05:58.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.888 "dma_device_type": 2 00:05:58.888 } 00:05:58.888 ], 00:05:58.888 "driver_specific": {} 00:05:58.888 }, 00:05:58.888 { 00:05:58.888 "name": "Passthru0", 00:05:58.888 "aliases": [ 00:05:58.888 "ca4e1853-c077-5059-81d8-84683d24f02c" 00:05:58.888 ], 00:05:58.888 "product_name": "passthru", 00:05:58.888 "block_size": 512, 00:05:58.888 "num_blocks": 16384, 00:05:58.888 "uuid": "ca4e1853-c077-5059-81d8-84683d24f02c", 00:05:58.888 "assigned_rate_limits": { 00:05:58.888 "rw_ios_per_sec": 0, 00:05:58.888 "rw_mbytes_per_sec": 0, 00:05:58.888 "r_mbytes_per_sec": 0, 00:05:58.888 "w_mbytes_per_sec": 0 00:05:58.888 }, 00:05:58.888 "claimed": false, 00:05:58.888 "zoned": false, 00:05:58.888 "supported_io_types": { 00:05:58.888 "read": true, 00:05:58.888 "write": true, 00:05:58.888 "unmap": true, 00:05:58.888 "flush": true, 00:05:58.888 "reset": true, 00:05:58.888 "nvme_admin": false, 00:05:58.888 "nvme_io": false, 00:05:58.888 "nvme_io_md": false, 00:05:58.888 "write_zeroes": true, 00:05:58.888 "zcopy": true, 00:05:58.888 "get_zone_info": false, 00:05:58.888 "zone_management": false, 00:05:58.888 "zone_append": false, 00:05:58.888 "compare": false, 00:05:58.888 "compare_and_write": false, 00:05:58.888 "abort": true, 00:05:58.888 "seek_hole": false, 00:05:58.888 "seek_data": false, 00:05:58.888 "copy": true, 00:05:58.888 "nvme_iov_md": false 00:05:58.888 }, 00:05:58.888 "memory_domains": [ 00:05:58.888 { 00:05:58.888 "dma_device_id": "system", 00:05:58.888 "dma_device_type": 1 00:05:58.888 }, 00:05:58.888 { 00:05:58.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.888 "dma_device_type": 2 00:05:58.888 } 00:05:58.888 ], 00:05:58.888 "driver_specific": { 00:05:58.888 "passthru": { 00:05:58.888 "name": "Passthru0", 00:05:58.888 "base_bdev_name": "Malloc2" 00:05:58.888 } 00:05:58.888 } 00:05:58.888 } 00:05:58.888 ]' 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.888 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:58.889 ************************************ 00:05:58.889 END TEST rpc_daemon_integrity 00:05:58.889 ************************************ 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.889 00:05:58.889 real 0m0.216s 00:05:58.889 user 0m0.127s 00:05:58.889 sys 0m0.031s 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.889 23:01:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.889 23:01:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:58.889 23:01:18 rpc -- rpc/rpc.sh@84 -- # killprocess 69887 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 69887 ']' 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@954 -- # kill -0 69887 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@955 -- # uname 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69887 00:05:58.889 killing process with pid 69887 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69887' 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@969 -- # kill 69887 00:05:58.889 23:01:18 rpc -- common/autotest_common.sh@974 -- # wait 69887 00:05:59.146 ************************************ 00:05:59.146 END TEST rpc 00:05:59.146 ************************************ 00:05:59.146 00:05:59.146 real 0m2.240s 00:05:59.146 user 0m2.678s 00:05:59.146 sys 0m0.592s 00:05:59.146 23:01:18 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.146 23:01:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.146 23:01:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.146 23:01:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.146 23:01:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.146 23:01:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.146 ************************************ 00:05:59.146 START TEST skip_rpc 00:05:59.146 ************************************ 00:05:59.146 23:01:18 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:59.404 * Looking for test storage... 
00:05:59.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.404 23:01:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.404 23:01:18 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.404 --rc genhtml_branch_coverage=1 00:05:59.404 --rc genhtml_function_coverage=1 00:05:59.404 --rc genhtml_legend=1 00:05:59.405 --rc geninfo_all_blocks=1 00:05:59.405 --rc geninfo_unexecuted_blocks=1 00:05:59.405 00:05:59.405 ' 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.405 --rc genhtml_branch_coverage=1 00:05:59.405 --rc genhtml_function_coverage=1 00:05:59.405 --rc genhtml_legend=1 00:05:59.405 --rc geninfo_all_blocks=1 00:05:59.405 --rc geninfo_unexecuted_blocks=1 00:05:59.405 00:05:59.405 ' 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:05:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.405 --rc genhtml_branch_coverage=1 00:05:59.405 --rc genhtml_function_coverage=1 00:05:59.405 --rc genhtml_legend=1 00:05:59.405 --rc geninfo_all_blocks=1 00:05:59.405 --rc geninfo_unexecuted_blocks=1 00:05:59.405 00:05:59.405 ' 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.405 --rc genhtml_branch_coverage=1 00:05:59.405 --rc genhtml_function_coverage=1 00:05:59.405 --rc genhtml_legend=1 00:05:59.405 --rc geninfo_all_blocks=1 00:05:59.405 --rc geninfo_unexecuted_blocks=1 00:05:59.405 00:05:59.405 ' 00:05:59.405 23:01:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:59.405 23:01:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.405 23:01:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.405 23:01:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.405 ************************************ 00:05:59.405 START TEST skip_rpc 00:05:59.405 ************************************ 00:05:59.405 23:01:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:59.405 23:01:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70083 00:05:59.405 23:01:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.405 23:01:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:59.405 23:01:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:59.405 [2024-11-18 23:01:18.722693] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:59.405 [2024-11-18 23:01:18.722805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70083 ] 00:05:59.662 [2024-11-18 23:01:18.871196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.662 [2024-11-18 23:01:18.904703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:04.926 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70083 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70083 ']' 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70083 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70083 00:06:04.927 killing process with pid 70083 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70083' 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70083 00:06:04.927 23:01:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70083 00:06:04.927 00:06:04.927 real 0m5.356s 00:06:04.927 user 0m5.014s 00:06:04.927 sys 0m0.243s 00:06:04.927 23:01:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.927 ************************************ 00:06:04.927 END TEST skip_rpc 00:06:04.927 ************************************ 00:06:04.927 23:01:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:06:04.927 23:01:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:04.927 23:01:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.927 23:01:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.927 23:01:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.927 ************************************ 00:06:04.927 START TEST skip_rpc_with_json 00:06:04.927 ************************************ 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:04.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70171 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70171 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70171 ']' 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.927 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.927 [2024-11-18 23:01:24.133916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:04.927 [2024-11-18 23:01:24.134040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70171 ] 00:06:04.927 [2024-11-18 23:01:24.280639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.185 [2024-11-18 23:01:24.320629] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.752 [2024-11-18 23:01:24.961810] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:05.752 request: 00:06:05.752 { 00:06:05.752 "trtype": "tcp", 00:06:05.752 "method": "nvmf_get_transports", 00:06:05.752 "req_id": 1 00:06:05.752 } 00:06:05.752 Got JSON-RPC error response 00:06:05.752 response: 00:06:05.752 { 00:06:05.752 "code": -19, 00:06:05.752 "message": "No such device" 00:06:05.752 } 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:05.752 [2024-11-18 23:01:24.973900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.752 23:01:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:06.011 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.011 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:06.011 { 00:06:06.011 "subsystems": [ 00:06:06.011 { 00:06:06.011 "subsystem": "fsdev", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": "fsdev_set_opts", 00:06:06.011 "params": { 00:06:06.011 "fsdev_io_pool_size": 65535, 00:06:06.011 "fsdev_io_cache_size": 256 00:06:06.011 } 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "keyring", 00:06:06.011 "config": [] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "iobuf", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": "iobuf_set_options", 00:06:06.011 "params": { 00:06:06.011 "small_pool_count": 8192, 00:06:06.011 "large_pool_count": 1024, 00:06:06.011 "small_bufsize": 8192, 00:06:06.011 "large_bufsize": 135168 00:06:06.011 } 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "sock", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": 
"sock_set_default_impl", 00:06:06.011 "params": { 00:06:06.011 "impl_name": "posix" 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "sock_impl_set_options", 00:06:06.011 "params": { 00:06:06.011 "impl_name": "ssl", 00:06:06.011 "recv_buf_size": 4096, 00:06:06.011 "send_buf_size": 4096, 00:06:06.011 "enable_recv_pipe": true, 00:06:06.011 "enable_quickack": false, 00:06:06.011 "enable_placement_id": 0, 00:06:06.011 "enable_zerocopy_send_server": true, 00:06:06.011 "enable_zerocopy_send_client": false, 00:06:06.011 "zerocopy_threshold": 0, 00:06:06.011 "tls_version": 0, 00:06:06.011 "enable_ktls": false 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "sock_impl_set_options", 00:06:06.011 "params": { 00:06:06.011 "impl_name": "posix", 00:06:06.011 "recv_buf_size": 2097152, 00:06:06.011 "send_buf_size": 2097152, 00:06:06.011 "enable_recv_pipe": true, 00:06:06.011 "enable_quickack": false, 00:06:06.011 "enable_placement_id": 0, 00:06:06.011 "enable_zerocopy_send_server": true, 00:06:06.011 "enable_zerocopy_send_client": false, 00:06:06.011 "zerocopy_threshold": 0, 00:06:06.011 "tls_version": 0, 00:06:06.011 "enable_ktls": false 00:06:06.011 } 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "vmd", 00:06:06.011 "config": [] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "accel", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": "accel_set_options", 00:06:06.011 "params": { 00:06:06.011 "small_cache_size": 128, 00:06:06.011 "large_cache_size": 16, 00:06:06.011 "task_count": 2048, 00:06:06.011 "sequence_count": 2048, 00:06:06.011 "buf_count": 2048 00:06:06.011 } 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "bdev", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": "bdev_set_options", 00:06:06.011 "params": { 00:06:06.011 "bdev_io_pool_size": 65535, 00:06:06.011 "bdev_io_cache_size": 256, 00:06:06.011 "bdev_auto_examine": true, 00:06:06.011 "iobuf_small_cache_size": 128, 00:06:06.011 "iobuf_large_cache_size": 16 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "bdev_raid_set_options", 00:06:06.011 "params": { 00:06:06.011 "process_window_size_kb": 1024, 00:06:06.011 "process_max_bandwidth_mb_sec": 0 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "bdev_iscsi_set_options", 00:06:06.011 "params": { 00:06:06.011 "timeout_sec": 30 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "bdev_nvme_set_options", 00:06:06.011 "params": { 00:06:06.011 "action_on_timeout": "none", 00:06:06.011 "timeout_us": 0, 00:06:06.011 "timeout_admin_us": 0, 00:06:06.011 "keep_alive_timeout_ms": 10000, 00:06:06.011 "arbitration_burst": 0, 00:06:06.011 "low_priority_weight": 0, 00:06:06.011 "medium_priority_weight": 0, 00:06:06.011 "high_priority_weight": 0, 00:06:06.011 "nvme_adminq_poll_period_us": 10000, 00:06:06.011 "nvme_ioq_poll_period_us": 0, 00:06:06.011 "io_queue_requests": 0, 00:06:06.011 "delay_cmd_submit": true, 00:06:06.011 "transport_retry_count": 4, 00:06:06.011 "bdev_retry_count": 3, 00:06:06.011 "transport_ack_timeout": 0, 00:06:06.011 "ctrlr_loss_timeout_sec": 0, 00:06:06.011 "reconnect_delay_sec": 0, 00:06:06.011 "fast_io_fail_timeout_sec": 0, 00:06:06.011 "disable_auto_failback": false, 00:06:06.011 "generate_uuids": false, 00:06:06.011 "transport_tos": 0, 00:06:06.011 "nvme_error_stat": false, 00:06:06.011 "rdma_srq_size": 0, 00:06:06.011 "io_path_stat": false, 00:06:06.011 
"allow_accel_sequence": false, 00:06:06.011 "rdma_max_cq_size": 0, 00:06:06.011 "rdma_cm_event_timeout_ms": 0, 00:06:06.011 "dhchap_digests": [ 00:06:06.011 "sha256", 00:06:06.011 "sha384", 00:06:06.011 "sha512" 00:06:06.011 ], 00:06:06.011 "dhchap_dhgroups": [ 00:06:06.011 "null", 00:06:06.011 "ffdhe2048", 00:06:06.011 "ffdhe3072", 00:06:06.011 "ffdhe4096", 00:06:06.011 "ffdhe6144", 00:06:06.011 "ffdhe8192" 00:06:06.011 ] 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "bdev_nvme_set_hotplug", 00:06:06.011 "params": { 00:06:06.011 "period_us": 100000, 00:06:06.011 "enable": false 00:06:06.011 } 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "method": "bdev_wait_for_examine" 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "scsi", 00:06:06.011 "config": null 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "scheduler", 00:06:06.011 "config": [ 00:06:06.011 { 00:06:06.011 "method": "framework_set_scheduler", 00:06:06.011 "params": { 00:06:06.011 "name": "static" 00:06:06.011 } 00:06:06.011 } 00:06:06.011 ] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "vhost_scsi", 00:06:06.011 "config": [] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "vhost_blk", 00:06:06.011 "config": [] 00:06:06.011 }, 00:06:06.011 { 00:06:06.011 "subsystem": "ublk", 00:06:06.011 "config": [] 00:06:06.011 }, 00:06:06.011 { 00:06:06.012 "subsystem": "nbd", 00:06:06.012 "config": [] 00:06:06.012 }, 00:06:06.012 { 00:06:06.012 "subsystem": "nvmf", 00:06:06.012 "config": [ 00:06:06.012 { 00:06:06.012 "method": "nvmf_set_config", 00:06:06.012 "params": { 00:06:06.012 "discovery_filter": "match_any", 00:06:06.012 "admin_cmd_passthru": { 00:06:06.012 "identify_ctrlr": false 00:06:06.012 }, 00:06:06.012 "dhchap_digests": [ 00:06:06.012 "sha256", 00:06:06.012 "sha384", 00:06:06.012 "sha512" 00:06:06.012 ], 00:06:06.012 "dhchap_dhgroups": [ 00:06:06.012 "null", 00:06:06.012 "ffdhe2048", 00:06:06.012 "ffdhe3072", 00:06:06.012 "ffdhe4096", 00:06:06.012 "ffdhe6144", 00:06:06.012 "ffdhe8192" 00:06:06.012 ] 00:06:06.012 } 00:06:06.012 }, 00:06:06.012 { 00:06:06.012 "method": "nvmf_set_max_subsystems", 00:06:06.012 "params": { 00:06:06.012 "max_subsystems": 1024 00:06:06.012 } 00:06:06.012 }, 00:06:06.012 { 00:06:06.012 "method": "nvmf_set_crdt", 00:06:06.012 "params": { 00:06:06.012 "crdt1": 0, 00:06:06.012 "crdt2": 0, 00:06:06.012 "crdt3": 0 00:06:06.012 } 00:06:06.012 }, 00:06:06.012 { 00:06:06.012 "method": "nvmf_create_transport", 00:06:06.012 "params": { 00:06:06.012 "trtype": "TCP", 00:06:06.012 "max_queue_depth": 128, 00:06:06.012 "max_io_qpairs_per_ctrlr": 127, 00:06:06.012 "in_capsule_data_size": 4096, 00:06:06.012 "max_io_size": 131072, 00:06:06.012 "io_unit_size": 131072, 00:06:06.012 "max_aq_depth": 128, 00:06:06.012 "num_shared_buffers": 511, 00:06:06.012 "buf_cache_size": 4294967295, 00:06:06.012 "dif_insert_or_strip": false, 00:06:06.012 "zcopy": false, 00:06:06.012 "c2h_success": true, 00:06:06.012 "sock_priority": 0, 00:06:06.012 "abort_timeout_sec": 1, 00:06:06.012 "ack_timeout": 0, 00:06:06.012 "data_wr_pool_size": 0 00:06:06.012 } 00:06:06.012 } 00:06:06.012 ] 00:06:06.012 }, 00:06:06.012 { 00:06:06.012 "subsystem": "iscsi", 00:06:06.012 "config": [ 00:06:06.012 { 00:06:06.012 "method": "iscsi_set_options", 00:06:06.012 "params": { 00:06:06.012 "node_base": "iqn.2016-06.io.spdk", 00:06:06.012 "max_sessions": 128, 00:06:06.012 "max_connections_per_session": 2, 00:06:06.012 "max_queue_depth": 64, 00:06:06.012 "default_time2wait": 2, 
00:06:06.012 "default_time2retain": 20, 00:06:06.012 "first_burst_length": 8192, 00:06:06.012 "immediate_data": true, 00:06:06.012 "allow_duplicated_isid": false, 00:06:06.012 "error_recovery_level": 0, 00:06:06.012 "nop_timeout": 60, 00:06:06.012 "nop_in_interval": 30, 00:06:06.012 "disable_chap": false, 00:06:06.012 "require_chap": false, 00:06:06.012 "mutual_chap": false, 00:06:06.012 "chap_group": 0, 00:06:06.012 "max_large_datain_per_connection": 64, 00:06:06.012 "max_r2t_per_connection": 4, 00:06:06.012 "pdu_pool_size": 36864, 00:06:06.012 "immediate_data_pool_size": 16384, 00:06:06.012 "data_out_pool_size": 2048 00:06:06.012 } 00:06:06.012 } 00:06:06.012 ] 00:06:06.012 } 00:06:06.012 ] 00:06:06.012 } 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70171 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70171 ']' 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70171 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70171 00:06:06.012 killing process with pid 70171 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70171' 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70171 00:06:06.012 23:01:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70171 00:06:06.270 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70194 00:06:06.270 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:06.270 23:01:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70194 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70194 ']' 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70194 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70194 00:06:11.578 killing process with pid 70194 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.578 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70194' 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70194 
00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70194 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:11.579 ************************************ 00:06:11.579 END TEST skip_rpc_with_json 00:06:11.579 ************************************ 00:06:11.579 00:06:11.579 real 0m6.773s 00:06:11.579 user 0m6.358s 00:06:11.579 sys 0m0.638s 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 23:01:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:11.579 23:01:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.579 23:01:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.579 23:01:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 ************************************ 00:06:11.579 START TEST skip_rpc_with_delay 00:06:11.579 ************************************ 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:11.579 23:01:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:11.838 [2024-11-18 23:01:30.972400] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
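The app.c error just above is the expected outcome: skip_rpc_with_delay starts spdk_tgt with both --no-rpc-server and --wait-for-rpc, and the target refuses the combination because --wait-for-rpc defers subsystem initialization until an RPC arrives, which can never happen when no RPC server is started. A minimal standalone reproduction of this negative check, assuming only the spdk_tgt binary path used in this run (the flag combination is taken verbatim from the xtrace output; everything else is illustrative), would look something like:

    # Sketch of the negative test exercised above, not the test's actual code;
    # the real script wraps the call in its NOT helper from autotest_common.sh.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # --wait-for-rpc defers init until an RPC that can never arrive here,
    # so spdk_tgt must exit non-zero instead of starting (a successful start
    # would block in this condition rather than return).
    if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected success: contradictory flags were accepted" >&2
        exit 1
    fi
    echo "spdk_tgt failed as expected"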
00:06:11.838 [2024-11-18 23:01:30.972551] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:11.838 00:06:11.838 real 0m0.132s 00:06:11.838 user 0m0.060s 00:06:11.838 sys 0m0.070s 00:06:11.838 ************************************ 00:06:11.838 END TEST skip_rpc_with_delay 00:06:11.838 ************************************ 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.838 23:01:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:11.838 23:01:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:11.838 23:01:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:11.838 23:01:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:11.838 23:01:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.838 23:01:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.838 23:01:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.838 ************************************ 00:06:11.838 START TEST exit_on_failed_rpc_init 00:06:11.838 ************************************ 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70305 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70305 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70305 ']' 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.838 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.838 [2024-11-18 23:01:31.145125] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:11.838 [2024-11-18 23:01:31.145606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70305 ] 00:06:12.097 [2024-11-18 23:01:31.293328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.097 [2024-11-18 23:01:31.334563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:12.663 23:01:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:12.922 [2024-11-18 23:01:32.059341] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:12.922 [2024-11-18 23:01:32.059467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:06:12.922 [2024-11-18 23:01:32.207903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.922 [2024-11-18 23:01:32.239791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.922 [2024-11-18 23:01:32.239870] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
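This "socket in use" error is exactly what exit_on_failed_rpc_init is designed to provoke: the first spdk_tgt (pid 70305) already listens on the default RPC socket /var/tmp/spdk.sock, so the second instance launched on core mask 0x2 cannot bind it and stops during RPC initialization, as the lines that follow confirm. A rough standalone sketch of the collision, assuming the same binary path and substituting a fixed sleep for the test's waitforlisten helper:

    # Illustrative only; the real test uses waitforlisten and the NOT wrapper
    # from autotest_common.sh rather than sleep and a plain if.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &          # first instance binds /var/tmp/spdk.sock
    first=$!
    sleep 2                       # crude stand-in for waitforlisten

    # Second instance targets the same default socket and must fail RPC init
    # with a non-zero exit; if it ever started successfully, this sketch
    # would block here instead of continuing.
    if "$SPDK_TGT" -m 0x2; then
        echo "unexpected success: second target should not have started" >&2
    fi

    kill "$first"; wait "$first"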
00:06:12.922 [2024-11-18 23:01:32.239888] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:12.922 [2024-11-18 23:01:32.239899] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70305 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70305 ']' 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70305 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70305 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70305' 00:06:13.181 killing process with pid 70305 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70305 00:06:13.181 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70305 00:06:13.441 00:06:13.441 real 0m1.596s 00:06:13.441 user 0m1.696s 00:06:13.441 sys 0m0.451s 00:06:13.441 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.441 23:01:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:13.441 ************************************ 00:06:13.441 END TEST exit_on_failed_rpc_init 00:06:13.441 ************************************ 00:06:13.442 23:01:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:13.442 00:06:13.442 real 0m14.206s 00:06:13.442 user 0m13.267s 00:06:13.442 sys 0m1.591s 00:06:13.442 23:01:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.442 23:01:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.442 ************************************ 00:06:13.442 END TEST skip_rpc 00:06:13.442 ************************************ 00:06:13.442 23:01:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:13.442 23:01:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.442 23:01:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.442 23:01:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.442 
************************************ 00:06:13.442 START TEST rpc_client 00:06:13.442 ************************************ 00:06:13.442 23:01:32 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:13.442 * Looking for test storage... 00:06:13.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.700 23:01:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.700 --rc genhtml_branch_coverage=1 00:06:13.700 --rc genhtml_function_coverage=1 00:06:13.700 --rc genhtml_legend=1 00:06:13.700 --rc geninfo_all_blocks=1 00:06:13.700 --rc geninfo_unexecuted_blocks=1 00:06:13.700 00:06:13.700 ' 00:06:13.700 23:01:32 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.701 --rc genhtml_branch_coverage=1 00:06:13.701 --rc genhtml_function_coverage=1 00:06:13.701 --rc genhtml_legend=1 00:06:13.701 --rc geninfo_all_blocks=1 00:06:13.701 --rc geninfo_unexecuted_blocks=1 00:06:13.701 00:06:13.701 ' 00:06:13.701 23:01:32 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.701 --rc genhtml_branch_coverage=1 00:06:13.701 --rc genhtml_function_coverage=1 00:06:13.701 --rc genhtml_legend=1 00:06:13.701 --rc geninfo_all_blocks=1 00:06:13.701 --rc geninfo_unexecuted_blocks=1 00:06:13.701 00:06:13.701 ' 00:06:13.701 23:01:32 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.701 --rc genhtml_branch_coverage=1 00:06:13.701 --rc genhtml_function_coverage=1 00:06:13.701 --rc genhtml_legend=1 00:06:13.701 --rc geninfo_all_blocks=1 00:06:13.701 --rc geninfo_unexecuted_blocks=1 00:06:13.701 00:06:13.701 ' 00:06:13.701 23:01:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:13.701 OK 00:06:13.701 23:01:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:13.701 00:06:13.701 real 0m0.184s 00:06:13.701 user 0m0.101s 00:06:13.701 sys 0m0.092s 00:06:13.701 23:01:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.701 23:01:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 ************************************ 00:06:13.701 END TEST rpc_client 00:06:13.701 ************************************ 00:06:13.701 23:01:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:13.701 23:01:32 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.701 23:01:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.701 23:01:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.701 ************************************ 00:06:13.701 START TEST json_config 00:06:13.701 ************************************ 00:06:13.701 23:01:32 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:13.701 23:01:33 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.701 23:01:33 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.701 23:01:33 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.959 23:01:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.959 23:01:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.959 23:01:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.959 23:01:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.959 23:01:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.959 23:01:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:13.959 23:01:33 json_config -- scripts/common.sh@345 -- # : 1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.959 23:01:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.959 23:01:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@353 -- # local d=1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.959 23:01:33 json_config -- scripts/common.sh@355 -- # echo 1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.959 23:01:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@353 -- # local d=2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.959 23:01:33 json_config -- scripts/common.sh@355 -- # echo 2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.959 23:01:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.959 23:01:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.959 23:01:33 json_config -- scripts/common.sh@368 -- # return 0 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.959 --rc genhtml_branch_coverage=1 00:06:13.959 --rc genhtml_function_coverage=1 00:06:13.959 --rc genhtml_legend=1 00:06:13.959 --rc geninfo_all_blocks=1 00:06:13.959 --rc geninfo_unexecuted_blocks=1 00:06:13.959 00:06:13.959 ' 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.959 --rc genhtml_branch_coverage=1 00:06:13.959 --rc genhtml_function_coverage=1 00:06:13.959 --rc genhtml_legend=1 00:06:13.959 --rc geninfo_all_blocks=1 00:06:13.959 --rc geninfo_unexecuted_blocks=1 00:06:13.959 00:06:13.959 ' 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.959 --rc genhtml_branch_coverage=1 00:06:13.959 --rc genhtml_function_coverage=1 00:06:13.959 --rc genhtml_legend=1 00:06:13.959 --rc geninfo_all_blocks=1 00:06:13.959 --rc geninfo_unexecuted_blocks=1 00:06:13.959 00:06:13.959 ' 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.959 --rc genhtml_branch_coverage=1 00:06:13.959 --rc genhtml_function_coverage=1 00:06:13.959 --rc genhtml_legend=1 00:06:13.959 --rc geninfo_all_blocks=1 00:06:13.959 --rc geninfo_unexecuted_blocks=1 00:06:13.959 00:06:13.959 ' 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.959 23:01:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:68fa7114-5336-4e39-bec4-9ce93b826881 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=68fa7114-5336-4e39-bec4-9ce93b826881 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.959 23:01:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.959 23:01:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.959 23:01:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.959 23:01:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.959 23:01:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.959 23:01:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.959 23:01:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.959 23:01:33 json_config -- paths/export.sh@5 -- # export PATH 00:06:13.959 23:01:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@51 -- # : 0 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.959 23:01:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.959 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.959 23:01:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:13.959 WARNING: No tests are enabled so not running JSON configuration tests 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:13.959 23:01:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:13.959 00:06:13.959 real 0m0.159s 00:06:13.959 user 0m0.111s 00:06:13.959 sys 0m0.051s 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.959 23:01:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.959 ************************************ 00:06:13.959 END TEST json_config 00:06:13.959 ************************************ 00:06:13.959 23:01:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.959 23:01:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.959 23:01:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.959 23:01:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.959 ************************************ 00:06:13.959 START TEST json_config_extra_key 00:06:13.959 ************************************ 00:06:13.959 23:01:33 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.959 23:01:33 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.959 23:01:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.959 23:01:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.959 23:01:33 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.959 23:01:33 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.959 23:01:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.960 --rc genhtml_branch_coverage=1 00:06:13.960 --rc genhtml_function_coverage=1 00:06:13.960 --rc genhtml_legend=1 00:06:13.960 --rc geninfo_all_blocks=1 00:06:13.960 --rc geninfo_unexecuted_blocks=1 00:06:13.960 00:06:13.960 ' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.960 --rc genhtml_branch_coverage=1 00:06:13.960 --rc genhtml_function_coverage=1 00:06:13.960 --rc genhtml_legend=1 00:06:13.960 --rc geninfo_all_blocks=1 00:06:13.960 --rc geninfo_unexecuted_blocks=1 00:06:13.960 00:06:13.960 ' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.960 --rc genhtml_branch_coverage=1 00:06:13.960 --rc genhtml_function_coverage=1 00:06:13.960 --rc genhtml_legend=1 00:06:13.960 --rc geninfo_all_blocks=1 00:06:13.960 --rc geninfo_unexecuted_blocks=1 00:06:13.960 00:06:13.960 ' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:13.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.960 --rc genhtml_branch_coverage=1 00:06:13.960 --rc 
genhtml_function_coverage=1 00:06:13.960 --rc genhtml_legend=1 00:06:13.960 --rc geninfo_all_blocks=1 00:06:13.960 --rc geninfo_unexecuted_blocks=1 00:06:13.960 00:06:13.960 ' 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:68fa7114-5336-4e39-bec4-9ce93b826881 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=68fa7114-5336-4e39-bec4-9ce93b826881 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.960 23:01:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.960 23:01:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.960 23:01:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.960 23:01:33 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.960 23:01:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.960 23:01:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.960 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.960 23:01:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.960 INFO: launching applications... 00:06:13.960 Waiting for target to run... 00:06:13.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
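The "Waiting for target to run..." message comes from json_config/common.sh, which starts spdk_tgt in the background and blocks in waitforlisten until the RPC socket is usable, giving up after the traced max_retries=100. A hedged sketch of that polling idiom; a fuller version would probe with an actual RPC call, while this simplification only checks that the UNIX socket exists:

    # waitforlisten sketch (assumed shape, not the verbatim helper).
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        local i max_retries=100                      # matches the traced default
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S $sock ]] && return 0               # UNIX socket has been created
            sleep 0.1
        done
        return 1                                     # timed out
    }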
00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:13.960 23:01:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70506 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70506 /var/tmp/spdk_tgt.sock 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70506 ']' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.960 23:01:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.960 23:01:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:14.217 [2024-11-18 23:01:33.387318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:14.217 [2024-11-18 23:01:33.387432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70506 ] 00:06:14.475 [2024-11-18 23:01:33.692014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.475 [2024-11-18 23:01:33.715919] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.040 00:06:15.040 INFO: shutting down applications... 00:06:15.040 23:01:34 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.040 23:01:34 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:15.040 23:01:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
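Here the target is launched from a canned JSON configuration rather than configured over live RPCs: the traced command is build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json .../extra_key.json, and the EAL parameter dump confirms the single-core mask and 1024 MB memory size. A sketch of that launch step with the pid bookkeeping the harness keeps in app_pid; the variable names here are illustrative:

    # Launch step reconstructed from the trace (paths as logged, bookkeeping assumed).
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!                    # stored so the SIGINT shutdown loop can find it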
00:06:15.040 23:01:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70506 ]] 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70506 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70506 00:06:15.040 23:01:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70506 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:15.606 SPDK target shutdown done 00:06:15.606 23:01:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:15.606 Success 00:06:15.606 23:01:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:15.606 00:06:15.606 real 0m1.559s 00:06:15.607 user 0m1.358s 00:06:15.607 sys 0m0.337s 00:06:15.607 23:01:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.607 23:01:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:15.607 ************************************ 00:06:15.607 END TEST json_config_extra_key 00:06:15.607 ************************************ 00:06:15.607 23:01:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.607 23:01:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.607 23:01:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.607 23:01:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.607 ************************************ 00:06:15.607 START TEST alias_rpc 00:06:15.607 ************************************ 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:15.607 * Looking for test storage... 
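The shutdown above is json_config/common.sh@38-45 in action: send SIGINT, then poll kill -0 every half second for at most 30 iterations before announcing "SPDK target shutdown done". The same loop as a self-contained sketch:

    # Graceful-shutdown poll mirroring json_config/common.sh@38-45.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process exited: shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'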
00:06:15.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.607 23:01:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.607 --rc genhtml_branch_coverage=1 00:06:15.607 --rc genhtml_function_coverage=1 00:06:15.607 --rc genhtml_legend=1 00:06:15.607 --rc geninfo_all_blocks=1 00:06:15.607 --rc geninfo_unexecuted_blocks=1 00:06:15.607 00:06:15.607 ' 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.607 --rc genhtml_branch_coverage=1 00:06:15.607 --rc genhtml_function_coverage=1 00:06:15.607 --rc genhtml_legend=1 00:06:15.607 --rc geninfo_all_blocks=1 00:06:15.607 --rc geninfo_unexecuted_blocks=1 00:06:15.607 00:06:15.607 ' 00:06:15.607 23:01:34 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.607 --rc genhtml_branch_coverage=1 00:06:15.607 --rc genhtml_function_coverage=1 00:06:15.607 --rc genhtml_legend=1 00:06:15.607 --rc geninfo_all_blocks=1 00:06:15.607 --rc geninfo_unexecuted_blocks=1 00:06:15.607 00:06:15.607 ' 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.607 --rc genhtml_branch_coverage=1 00:06:15.607 --rc genhtml_function_coverage=1 00:06:15.607 --rc genhtml_legend=1 00:06:15.607 --rc geninfo_all_blocks=1 00:06:15.607 --rc geninfo_unexecuted_blocks=1 00:06:15.607 00:06:15.607 ' 00:06:15.607 23:01:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:15.607 23:01:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70579 00:06:15.607 23:01:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70579 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70579 ']' 00:06:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.607 23:01:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.607 23:01:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.865 [2024-11-18 23:01:34.992541] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
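alias_rpc.sh@10, traced just above, arms an ERR trap so a leaked spdk_tgt cannot outlive a failing test: any command that errors triggers killprocess on the recorded pid before the script exits. The idiom in isolation, using the pid from this run:

    # ERR-trap cleanup idiom from alias_rpc.sh@10 (killprocess is the harness helper).
    spdk_tgt_pid=70579                            # pid of the already-launched target
    trap 'killprocess $spdk_tgt_pid; exit 1' ERR
    # ...test body: any failing command now tears the target down...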
00:06:15.865 [2024-11-18 23:01:34.992651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70579 ] 00:06:15.865 [2024-11-18 23:01:35.140522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.865 [2024-11-18 23:01:35.183400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.434 23:01:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.434 23:01:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.434 23:01:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:16.695 23:01:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70579 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70579 ']' 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70579 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70579 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.695 killing process with pid 70579 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70579' 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@969 -- # kill 70579 00:06:16.695 23:01:36 alias_rpc -- common/autotest_common.sh@974 -- # wait 70579 00:06:17.267 00:06:17.267 real 0m1.606s 00:06:17.267 user 0m1.674s 00:06:17.267 sys 0m0.406s 00:06:17.267 23:01:36 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.267 ************************************ 00:06:17.267 END TEST alias_rpc 00:06:17.267 ************************************ 00:06:17.267 23:01:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.267 23:01:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:17.267 23:01:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.267 23:01:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.267 23:01:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.267 23:01:36 -- common/autotest_common.sh@10 -- # set +x 00:06:17.267 ************************************ 00:06:17.267 START TEST spdkcli_tcp 00:06:17.267 ************************************ 00:06:17.267 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:17.267 * Looking for test storage... 
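killprocess itself was traced in full just above: confirm the pid is alive with kill -0, branch on uname, resolve the command name with ps --no-headers -o comm= (reactor_0 for an SPDK app), refuse to signal sudo, then kill and reap with wait. Condensed into one function, following the trace:

    # Condensed killprocess following the autotest_common.sh trace above.
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                    # nothing to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 for spdk_tgt
        [[ $name == sudo ]] && return 1               # safety branch seen in the trace
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }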
00:06:17.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.268 23:01:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.268 --rc genhtml_branch_coverage=1 00:06:17.268 --rc genhtml_function_coverage=1 00:06:17.268 --rc genhtml_legend=1 00:06:17.268 --rc geninfo_all_blocks=1 00:06:17.268 --rc geninfo_unexecuted_blocks=1 00:06:17.268 00:06:17.268 ' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.268 --rc genhtml_branch_coverage=1 00:06:17.268 --rc genhtml_function_coverage=1 00:06:17.268 --rc genhtml_legend=1 00:06:17.268 --rc geninfo_all_blocks=1 00:06:17.268 --rc geninfo_unexecuted_blocks=1 00:06:17.268 
00:06:17.268 ' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.268 --rc genhtml_branch_coverage=1 00:06:17.268 --rc genhtml_function_coverage=1 00:06:17.268 --rc genhtml_legend=1 00:06:17.268 --rc geninfo_all_blocks=1 00:06:17.268 --rc geninfo_unexecuted_blocks=1 00:06:17.268 00:06:17.268 ' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.268 --rc genhtml_branch_coverage=1 00:06:17.268 --rc genhtml_function_coverage=1 00:06:17.268 --rc genhtml_legend=1 00:06:17.268 --rc geninfo_all_blocks=1 00:06:17.268 --rc geninfo_unexecuted_blocks=1 00:06:17.268 00:06:17.268 ' 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70659 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70659 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70659 ']' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.268 23:01:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:17.268 23:01:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.268 [2024-11-18 23:01:36.637555] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
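What follows is the point of this suite: spdkcli_tcp proves the RPC server is reachable over TCP by bridging with socat. tcp.sh@30 starts socat listening on 127.0.0.1:9998 and forwarding to the target's UNIX socket, and tcp.sh@33 drives rpc.py against the TCP side; the long rpc_get_methods listing below is the round-tripped reply. The bridge, using the exact commands from the trace:

    # TCP-to-UNIX RPC bridge as run by spdkcli/tcp.sh@30-33.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods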
00:06:17.268 [2024-11-18 23:01:36.637686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70659 ] 00:06:17.527 [2024-11-18 23:01:36.784860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.527 [2024-11-18 23:01:36.830660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.527 [2024-11-18 23:01:36.830729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.461 23:01:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.461 23:01:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:18.461 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70676 00:06:18.461 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:18.461 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:18.461 [ 00:06:18.461 "bdev_malloc_delete", 00:06:18.461 "bdev_malloc_create", 00:06:18.461 "bdev_null_resize", 00:06:18.462 "bdev_null_delete", 00:06:18.462 "bdev_null_create", 00:06:18.462 "bdev_nvme_cuse_unregister", 00:06:18.462 "bdev_nvme_cuse_register", 00:06:18.462 "bdev_opal_new_user", 00:06:18.462 "bdev_opal_set_lock_state", 00:06:18.462 "bdev_opal_delete", 00:06:18.462 "bdev_opal_get_info", 00:06:18.462 "bdev_opal_create", 00:06:18.462 "bdev_nvme_opal_revert", 00:06:18.462 "bdev_nvme_opal_init", 00:06:18.462 "bdev_nvme_send_cmd", 00:06:18.462 "bdev_nvme_set_keys", 00:06:18.462 "bdev_nvme_get_path_iostat", 00:06:18.462 "bdev_nvme_get_mdns_discovery_info", 00:06:18.462 "bdev_nvme_stop_mdns_discovery", 00:06:18.462 "bdev_nvme_start_mdns_discovery", 00:06:18.462 "bdev_nvme_set_multipath_policy", 00:06:18.462 "bdev_nvme_set_preferred_path", 00:06:18.462 "bdev_nvme_get_io_paths", 00:06:18.462 "bdev_nvme_remove_error_injection", 00:06:18.462 "bdev_nvme_add_error_injection", 00:06:18.462 "bdev_nvme_get_discovery_info", 00:06:18.462 "bdev_nvme_stop_discovery", 00:06:18.462 "bdev_nvme_start_discovery", 00:06:18.462 "bdev_nvme_get_controller_health_info", 00:06:18.462 "bdev_nvme_disable_controller", 00:06:18.462 "bdev_nvme_enable_controller", 00:06:18.462 "bdev_nvme_reset_controller", 00:06:18.462 "bdev_nvme_get_transport_statistics", 00:06:18.462 "bdev_nvme_apply_firmware", 00:06:18.462 "bdev_nvme_detach_controller", 00:06:18.462 "bdev_nvme_get_controllers", 00:06:18.462 "bdev_nvme_attach_controller", 00:06:18.462 "bdev_nvme_set_hotplug", 00:06:18.462 "bdev_nvme_set_options", 00:06:18.462 "bdev_passthru_delete", 00:06:18.462 "bdev_passthru_create", 00:06:18.462 "bdev_lvol_set_parent_bdev", 00:06:18.462 "bdev_lvol_set_parent", 00:06:18.462 "bdev_lvol_check_shallow_copy", 00:06:18.462 "bdev_lvol_start_shallow_copy", 00:06:18.462 "bdev_lvol_grow_lvstore", 00:06:18.462 "bdev_lvol_get_lvols", 00:06:18.462 "bdev_lvol_get_lvstores", 00:06:18.462 "bdev_lvol_delete", 00:06:18.462 "bdev_lvol_set_read_only", 00:06:18.462 "bdev_lvol_resize", 00:06:18.462 "bdev_lvol_decouple_parent", 00:06:18.462 "bdev_lvol_inflate", 00:06:18.462 "bdev_lvol_rename", 00:06:18.462 "bdev_lvol_clone_bdev", 00:06:18.462 "bdev_lvol_clone", 00:06:18.462 "bdev_lvol_snapshot", 00:06:18.462 "bdev_lvol_create", 00:06:18.462 "bdev_lvol_delete_lvstore", 00:06:18.462 "bdev_lvol_rename_lvstore", 00:06:18.462 
"bdev_lvol_create_lvstore", 00:06:18.462 "bdev_raid_set_options", 00:06:18.462 "bdev_raid_remove_base_bdev", 00:06:18.462 "bdev_raid_add_base_bdev", 00:06:18.462 "bdev_raid_delete", 00:06:18.462 "bdev_raid_create", 00:06:18.462 "bdev_raid_get_bdevs", 00:06:18.462 "bdev_error_inject_error", 00:06:18.462 "bdev_error_delete", 00:06:18.462 "bdev_error_create", 00:06:18.462 "bdev_split_delete", 00:06:18.462 "bdev_split_create", 00:06:18.462 "bdev_delay_delete", 00:06:18.462 "bdev_delay_create", 00:06:18.462 "bdev_delay_update_latency", 00:06:18.462 "bdev_zone_block_delete", 00:06:18.462 "bdev_zone_block_create", 00:06:18.462 "blobfs_create", 00:06:18.462 "blobfs_detect", 00:06:18.462 "blobfs_set_cache_size", 00:06:18.462 "bdev_xnvme_delete", 00:06:18.462 "bdev_xnvme_create", 00:06:18.462 "bdev_aio_delete", 00:06:18.462 "bdev_aio_rescan", 00:06:18.462 "bdev_aio_create", 00:06:18.462 "bdev_ftl_set_property", 00:06:18.462 "bdev_ftl_get_properties", 00:06:18.462 "bdev_ftl_get_stats", 00:06:18.462 "bdev_ftl_unmap", 00:06:18.462 "bdev_ftl_unload", 00:06:18.462 "bdev_ftl_delete", 00:06:18.462 "bdev_ftl_load", 00:06:18.462 "bdev_ftl_create", 00:06:18.462 "bdev_virtio_attach_controller", 00:06:18.462 "bdev_virtio_scsi_get_devices", 00:06:18.462 "bdev_virtio_detach_controller", 00:06:18.462 "bdev_virtio_blk_set_hotplug", 00:06:18.462 "bdev_iscsi_delete", 00:06:18.462 "bdev_iscsi_create", 00:06:18.462 "bdev_iscsi_set_options", 00:06:18.462 "accel_error_inject_error", 00:06:18.462 "ioat_scan_accel_module", 00:06:18.462 "dsa_scan_accel_module", 00:06:18.462 "iaa_scan_accel_module", 00:06:18.462 "keyring_file_remove_key", 00:06:18.462 "keyring_file_add_key", 00:06:18.462 "keyring_linux_set_options", 00:06:18.462 "fsdev_aio_delete", 00:06:18.462 "fsdev_aio_create", 00:06:18.462 "iscsi_get_histogram", 00:06:18.462 "iscsi_enable_histogram", 00:06:18.462 "iscsi_set_options", 00:06:18.462 "iscsi_get_auth_groups", 00:06:18.462 "iscsi_auth_group_remove_secret", 00:06:18.462 "iscsi_auth_group_add_secret", 00:06:18.462 "iscsi_delete_auth_group", 00:06:18.462 "iscsi_create_auth_group", 00:06:18.462 "iscsi_set_discovery_auth", 00:06:18.462 "iscsi_get_options", 00:06:18.462 "iscsi_target_node_request_logout", 00:06:18.462 "iscsi_target_node_set_redirect", 00:06:18.462 "iscsi_target_node_set_auth", 00:06:18.462 "iscsi_target_node_add_lun", 00:06:18.462 "iscsi_get_stats", 00:06:18.462 "iscsi_get_connections", 00:06:18.462 "iscsi_portal_group_set_auth", 00:06:18.462 "iscsi_start_portal_group", 00:06:18.462 "iscsi_delete_portal_group", 00:06:18.462 "iscsi_create_portal_group", 00:06:18.462 "iscsi_get_portal_groups", 00:06:18.462 "iscsi_delete_target_node", 00:06:18.462 "iscsi_target_node_remove_pg_ig_maps", 00:06:18.462 "iscsi_target_node_add_pg_ig_maps", 00:06:18.462 "iscsi_create_target_node", 00:06:18.462 "iscsi_get_target_nodes", 00:06:18.462 "iscsi_delete_initiator_group", 00:06:18.462 "iscsi_initiator_group_remove_initiators", 00:06:18.462 "iscsi_initiator_group_add_initiators", 00:06:18.462 "iscsi_create_initiator_group", 00:06:18.462 "iscsi_get_initiator_groups", 00:06:18.462 "nvmf_set_crdt", 00:06:18.462 "nvmf_set_config", 00:06:18.462 "nvmf_set_max_subsystems", 00:06:18.462 "nvmf_stop_mdns_prr", 00:06:18.462 "nvmf_publish_mdns_prr", 00:06:18.462 "nvmf_subsystem_get_listeners", 00:06:18.462 "nvmf_subsystem_get_qpairs", 00:06:18.462 "nvmf_subsystem_get_controllers", 00:06:18.462 "nvmf_get_stats", 00:06:18.462 "nvmf_get_transports", 00:06:18.462 "nvmf_create_transport", 00:06:18.462 "nvmf_get_targets", 00:06:18.462 
"nvmf_delete_target", 00:06:18.462 "nvmf_create_target", 00:06:18.462 "nvmf_subsystem_allow_any_host", 00:06:18.462 "nvmf_subsystem_set_keys", 00:06:18.462 "nvmf_subsystem_remove_host", 00:06:18.462 "nvmf_subsystem_add_host", 00:06:18.462 "nvmf_ns_remove_host", 00:06:18.462 "nvmf_ns_add_host", 00:06:18.462 "nvmf_subsystem_remove_ns", 00:06:18.462 "nvmf_subsystem_set_ns_ana_group", 00:06:18.462 "nvmf_subsystem_add_ns", 00:06:18.462 "nvmf_subsystem_listener_set_ana_state", 00:06:18.462 "nvmf_discovery_get_referrals", 00:06:18.462 "nvmf_discovery_remove_referral", 00:06:18.462 "nvmf_discovery_add_referral", 00:06:18.462 "nvmf_subsystem_remove_listener", 00:06:18.462 "nvmf_subsystem_add_listener", 00:06:18.462 "nvmf_delete_subsystem", 00:06:18.462 "nvmf_create_subsystem", 00:06:18.462 "nvmf_get_subsystems", 00:06:18.462 "env_dpdk_get_mem_stats", 00:06:18.462 "nbd_get_disks", 00:06:18.462 "nbd_stop_disk", 00:06:18.462 "nbd_start_disk", 00:06:18.462 "ublk_recover_disk", 00:06:18.462 "ublk_get_disks", 00:06:18.462 "ublk_stop_disk", 00:06:18.462 "ublk_start_disk", 00:06:18.462 "ublk_destroy_target", 00:06:18.462 "ublk_create_target", 00:06:18.462 "virtio_blk_create_transport", 00:06:18.462 "virtio_blk_get_transports", 00:06:18.462 "vhost_controller_set_coalescing", 00:06:18.462 "vhost_get_controllers", 00:06:18.462 "vhost_delete_controller", 00:06:18.462 "vhost_create_blk_controller", 00:06:18.462 "vhost_scsi_controller_remove_target", 00:06:18.462 "vhost_scsi_controller_add_target", 00:06:18.462 "vhost_start_scsi_controller", 00:06:18.462 "vhost_create_scsi_controller", 00:06:18.462 "thread_set_cpumask", 00:06:18.462 "scheduler_set_options", 00:06:18.462 "framework_get_governor", 00:06:18.462 "framework_get_scheduler", 00:06:18.462 "framework_set_scheduler", 00:06:18.462 "framework_get_reactors", 00:06:18.462 "thread_get_io_channels", 00:06:18.462 "thread_get_pollers", 00:06:18.462 "thread_get_stats", 00:06:18.462 "framework_monitor_context_switch", 00:06:18.462 "spdk_kill_instance", 00:06:18.462 "log_enable_timestamps", 00:06:18.462 "log_get_flags", 00:06:18.462 "log_clear_flag", 00:06:18.462 "log_set_flag", 00:06:18.462 "log_get_level", 00:06:18.462 "log_set_level", 00:06:18.462 "log_get_print_level", 00:06:18.462 "log_set_print_level", 00:06:18.462 "framework_enable_cpumask_locks", 00:06:18.462 "framework_disable_cpumask_locks", 00:06:18.462 "framework_wait_init", 00:06:18.462 "framework_start_init", 00:06:18.462 "scsi_get_devices", 00:06:18.462 "bdev_get_histogram", 00:06:18.462 "bdev_enable_histogram", 00:06:18.462 "bdev_set_qos_limit", 00:06:18.462 "bdev_set_qd_sampling_period", 00:06:18.462 "bdev_get_bdevs", 00:06:18.462 "bdev_reset_iostat", 00:06:18.462 "bdev_get_iostat", 00:06:18.462 "bdev_examine", 00:06:18.462 "bdev_wait_for_examine", 00:06:18.462 "bdev_set_options", 00:06:18.462 "accel_get_stats", 00:06:18.462 "accel_set_options", 00:06:18.462 "accel_set_driver", 00:06:18.462 "accel_crypto_key_destroy", 00:06:18.462 "accel_crypto_keys_get", 00:06:18.462 "accel_crypto_key_create", 00:06:18.462 "accel_assign_opc", 00:06:18.462 "accel_get_module_info", 00:06:18.462 "accel_get_opc_assignments", 00:06:18.463 "vmd_rescan", 00:06:18.463 "vmd_remove_device", 00:06:18.463 "vmd_enable", 00:06:18.463 "sock_get_default_impl", 00:06:18.463 "sock_set_default_impl", 00:06:18.463 "sock_impl_set_options", 00:06:18.463 "sock_impl_get_options", 00:06:18.463 "iobuf_get_stats", 00:06:18.463 "iobuf_set_options", 00:06:18.463 "keyring_get_keys", 00:06:18.463 "framework_get_pci_devices", 00:06:18.463 
"framework_get_config", 00:06:18.463 "framework_get_subsystems", 00:06:18.463 "fsdev_set_opts", 00:06:18.463 "fsdev_get_opts", 00:06:18.463 "trace_get_info", 00:06:18.463 "trace_get_tpoint_group_mask", 00:06:18.463 "trace_disable_tpoint_group", 00:06:18.463 "trace_enable_tpoint_group", 00:06:18.463 "trace_clear_tpoint_mask", 00:06:18.463 "trace_set_tpoint_mask", 00:06:18.463 "notify_get_notifications", 00:06:18.463 "notify_get_types", 00:06:18.463 "spdk_get_version", 00:06:18.463 "rpc_get_methods" 00:06:18.463 ] 00:06:18.463 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.463 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:18.463 23:01:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70659 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70659 ']' 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70659 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70659 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70659' 00:06:18.463 killing process with pid 70659 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70659 00:06:18.463 23:01:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70659 00:06:18.721 00:06:18.721 real 0m1.589s 00:06:18.721 user 0m2.747s 00:06:18.721 sys 0m0.472s 00:06:18.721 23:01:38 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.721 23:01:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 ************************************ 00:06:18.721 END TEST spdkcli_tcp 00:06:18.721 ************************************ 00:06:18.721 23:01:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.721 23:01:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.721 23:01:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.721 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:18.721 ************************************ 00:06:18.721 START TEST dpdk_mem_utility 00:06:18.721 ************************************ 00:06:18.721 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:18.992 * Looking for test storage... 
00:06:18.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:18.992 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.992 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.992 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.992 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.992 23:01:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.992 23:01:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.992 23:01:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.992 23:01:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.992 23:01:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.993 23:01:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.993 --rc genhtml_branch_coverage=1 00:06:18.993 --rc genhtml_function_coverage=1 00:06:18.993 --rc genhtml_legend=1 00:06:18.993 --rc geninfo_all_blocks=1 00:06:18.993 --rc geninfo_unexecuted_blocks=1 00:06:18.993 00:06:18.993 ' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.993 --rc 
genhtml_branch_coverage=1 00:06:18.993 --rc genhtml_function_coverage=1 00:06:18.993 --rc genhtml_legend=1 00:06:18.993 --rc geninfo_all_blocks=1 00:06:18.993 --rc geninfo_unexecuted_blocks=1 00:06:18.993 00:06:18.993 ' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.993 --rc genhtml_branch_coverage=1 00:06:18.993 --rc genhtml_function_coverage=1 00:06:18.993 --rc genhtml_legend=1 00:06:18.993 --rc geninfo_all_blocks=1 00:06:18.993 --rc geninfo_unexecuted_blocks=1 00:06:18.993 00:06:18.993 ' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.993 --rc genhtml_branch_coverage=1 00:06:18.993 --rc genhtml_function_coverage=1 00:06:18.993 --rc genhtml_legend=1 00:06:18.993 --rc geninfo_all_blocks=1 00:06:18.993 --rc geninfo_unexecuted_blocks=1 00:06:18.993 00:06:18.993 ' 00:06:18.993 23:01:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.993 23:01:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70753 00:06:18.993 23:01:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70753 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70753 ']' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.993 23:01:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:18.993 23:01:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.993 [2024-11-18 23:01:38.279830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:18.993 [2024-11-18 23:01:38.279965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70753 ] 00:06:19.253 [2024-11-18 23:01:38.423808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.253 [2024-11-18 23:01:38.455758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.818 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.818 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:19.818 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:19.818 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:19.818 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 { 00:06:19.818 "filename": "/tmp/spdk_mem_dump.txt" 00:06:19.818 } 00:06:19.818 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:19.818 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:19.818 1 heaps totaling size 860.000000 MiB 00:06:19.818 size: 860.000000 MiB heap id: 0 00:06:19.818 end heaps---------- 00:06:19.818 9 mempools totaling size 642.649841 MiB 00:06:19.818 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:19.818 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:19.818 size: 92.545471 MiB name: bdev_io_70753 00:06:19.818 size: 51.011292 MiB name: evtpool_70753 00:06:19.818 size: 50.003479 MiB name: msgpool_70753 00:06:19.818 size: 36.509338 MiB name: fsdev_io_70753 00:06:19.818 size: 21.763794 MiB name: PDU_Pool 00:06:19.818 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:19.818 size: 0.026123 MiB name: Session_Pool 00:06:19.818 end mempools------- 00:06:19.818 6 memzones totaling size 4.142822 MiB 00:06:19.818 size: 1.000366 MiB name: RG_ring_0_70753 00:06:19.818 size: 1.000366 MiB name: RG_ring_1_70753 00:06:19.818 size: 1.000366 MiB name: RG_ring_4_70753 00:06:19.818 size: 1.000366 MiB name: RG_ring_5_70753 00:06:19.818 size: 0.125366 MiB name: RG_ring_2_70753 00:06:19.818 size: 0.015991 MiB name: RG_ring_3_70753 00:06:19.818 end memzones------- 00:06:19.818 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:20.079 heap id: 0 total size: 860.000000 MiB number of busy elements: 305 number of free elements: 16 00:06:20.079 list of free elements. 
size: 13.936890 MiB 00:06:20.079 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:20.079 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:20.079 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:20.079 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:20.079 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:20.079 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:20.079 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:20.079 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:20.079 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:20.079 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:20.079 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:20.079 element at address: 0x200003e00000 with size: 0.488647 MiB 00:06:20.079 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:20.079 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:20.079 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:20.079 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:20.079 list of standard malloc elements. size: 199.266418 MiB 00:06:20.079 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:20.079 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:20.079 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:20.079 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:20.079 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:20.079 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:20.079 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:20.079 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:20.079 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:20.079 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:20.079 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:20.080 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:20.080 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:20.080 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87d940 
with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:20.080 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893340 with size: 0.000183 MiB 
00:06:20.081 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:20.081 element at 
address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6eb80 
with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:20.081 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:20.081 list of memzone associated elements. 
size: 646.796692 MiB 00:06:20.081 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:20.081 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:20.081 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:20.081 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:20.082 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:20.082 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70753_0 00:06:20.082 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:20.082 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70753_0 00:06:20.082 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:20.082 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70753_0 00:06:20.082 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:20.082 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70753_0 00:06:20.082 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:20.082 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:20.082 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:20.082 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:20.082 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:20.082 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70753 00:06:20.082 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:20.082 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70753 00:06:20.082 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:20.082 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70753 00:06:20.082 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:20.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:20.082 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:20.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:20.082 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:20.082 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:20.082 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:20.082 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:20.082 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:20.082 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70753 00:06:20.082 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:20.082 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70753 00:06:20.082 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:20.082 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70753 00:06:20.082 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:20.082 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70753 00:06:20.082 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:20.082 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70753 00:06:20.082 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:20.082 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70753 00:06:20.082 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:20.082 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:20.082 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:20.082 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:20.082 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:20.082 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:20.082 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:20.082 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70753 00:06:20.082 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:20.082 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:20.082 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:20.082 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:20.082 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:20.082 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70753 00:06:20.082 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:20.082 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:20.082 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:20.082 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70753 00:06:20.082 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:20.082 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70753 00:06:20.082 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:20.082 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70753 00:06:20.082 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:20.082 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:20.082 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:20.082 23:01:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70753 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70753 ']' 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70753 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70753 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.082 killing process with pid 70753 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70753' 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70753 00:06:20.082 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70753 00:06:20.340 ************************************ 00:06:20.340 END TEST dpdk_mem_utility 00:06:20.340 ************************************ 00:06:20.340 00:06:20.340 real 0m1.463s 00:06:20.340 user 0m1.521s 00:06:20.340 sys 0m0.359s 00:06:20.340 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.340 23:01:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.340 23:01:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.340 23:01:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.340 23:01:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.340 23:01:39 -- common/autotest_common.sh@10 -- # set +x 
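For reference, the heap/mempool/memzone report above is produced by the two helper invocations visible in the trace (test_dpdk_mem_info.sh@19, @21 and @23). A minimal sketch of reproducing it by hand, assuming a running spdk_tgt on the default /var/tmp/spdk.sock socket; rpc.py stands in for the test's rpc_cmd wrapper:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt, as returned above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element breakdown of heap 0, as listed above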
00:06:20.340 ************************************ 00:06:20.340 START TEST event 00:06:20.340 ************************************ 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:20.340 * Looking for test storage... 00:06:20.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:20.340 23:01:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.340 23:01:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.340 23:01:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.340 23:01:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.340 23:01:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.340 23:01:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.340 23:01:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.340 23:01:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.340 23:01:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.340 23:01:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.340 23:01:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.340 23:01:39 event -- scripts/common.sh@344 -- # case "$op" in 00:06:20.340 23:01:39 event -- scripts/common.sh@345 -- # : 1 00:06:20.340 23:01:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.340 23:01:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.340 23:01:39 event -- scripts/common.sh@365 -- # decimal 1 00:06:20.340 23:01:39 event -- scripts/common.sh@353 -- # local d=1 00:06:20.340 23:01:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.340 23:01:39 event -- scripts/common.sh@355 -- # echo 1 00:06:20.340 23:01:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.340 23:01:39 event -- scripts/common.sh@366 -- # decimal 2 00:06:20.340 23:01:39 event -- scripts/common.sh@353 -- # local d=2 00:06:20.340 23:01:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.340 23:01:39 event -- scripts/common.sh@355 -- # echo 2 00:06:20.340 23:01:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.340 23:01:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.340 23:01:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.340 23:01:39 event -- scripts/common.sh@368 -- # return 0 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.340 23:01:39 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:20.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.340 --rc genhtml_branch_coverage=1 00:06:20.341 --rc genhtml_function_coverage=1 00:06:20.341 --rc genhtml_legend=1 00:06:20.341 --rc geninfo_all_blocks=1 00:06:20.341 --rc geninfo_unexecuted_blocks=1 00:06:20.341 00:06:20.341 ' 00:06:20.341 23:01:39 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:20.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.341 --rc genhtml_branch_coverage=1 00:06:20.341 --rc genhtml_function_coverage=1 00:06:20.341 --rc genhtml_legend=1 00:06:20.341 --rc 
geninfo_all_blocks=1 00:06:20.341 --rc geninfo_unexecuted_blocks=1 00:06:20.341 00:06:20.341 ' 00:06:20.341 23:01:39 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:20.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.341 --rc genhtml_branch_coverage=1 00:06:20.341 --rc genhtml_function_coverage=1 00:06:20.341 --rc genhtml_legend=1 00:06:20.341 --rc geninfo_all_blocks=1 00:06:20.341 --rc geninfo_unexecuted_blocks=1 00:06:20.341 00:06:20.341 ' 00:06:20.341 23:01:39 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:20.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.341 --rc genhtml_branch_coverage=1 00:06:20.341 --rc genhtml_function_coverage=1 00:06:20.341 --rc genhtml_legend=1 00:06:20.341 --rc geninfo_all_blocks=1 00:06:20.341 --rc geninfo_unexecuted_blocks=1 00:06:20.341 00:06:20.341 ' 00:06:20.341 23:01:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:20.341 23:01:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:20.341 23:01:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.341 23:01:39 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:20.341 23:01:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.341 23:01:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.341 ************************************ 00:06:20.341 START TEST event_perf 00:06:20.341 ************************************ 00:06:20.341 23:01:39 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:20.601 Running I/O for 1 seconds...[2024-11-18 23:01:39.739498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:20.601 [2024-11-18 23:01:39.739629] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70834 ] 00:06:20.601 [2024-11-18 23:01:39.888652] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.601 [2024-11-18 23:01:39.924941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.601 Running I/O for 1 seconds...[2024-11-18 23:01:39.925302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.601 [2024-11-18 23:01:39.925417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.601 [2024-11-18 23:01:39.925513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.975 00:06:21.976 lcore 0: 140381 00:06:21.976 lcore 1: 140382 00:06:21.976 lcore 2: 140379 00:06:21.976 lcore 3: 140380 00:06:21.976 done. 
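The four lcore counters above come from a one-second run with reactors on cores 0-3. A minimal sketch of invoking the perf binary directly with the same flags the harness traces (-m 0xF is the reactor core mask, -t 1 the run time in seconds):

  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1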
00:06:21.976 00:06:21.976 real 0m1.281s 00:06:21.976 user 0m4.075s 00:06:21.976 sys 0m0.086s 00:06:21.976 23:01:40 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.976 23:01:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.976 ************************************ 00:06:21.976 END TEST event_perf 00:06:21.976 ************************************ 00:06:21.976 23:01:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.976 23:01:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:21.976 23:01:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.976 23:01:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.976 ************************************ 00:06:21.976 START TEST event_reactor 00:06:21.976 ************************************ 00:06:21.976 23:01:41 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:21.976 [2024-11-18 23:01:41.063097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:21.976 [2024-11-18 23:01:41.063230] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:06:21.976 [2024-11-18 23:01:41.223385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.976 [2024-11-18 23:01:41.270331] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.351 test_start 00:06:23.351 oneshot 00:06:23.351 tick 100 00:06:23.351 tick 100 00:06:23.351 tick 250 00:06:23.351 tick 100 00:06:23.351 tick 100 00:06:23.351 tick 100 00:06:23.351 tick 250 00:06:23.351 tick 500 00:06:23.351 tick 100 00:06:23.351 tick 100 00:06:23.351 tick 250 00:06:23.351 tick 100 00:06:23.351 tick 100 00:06:23.351 test_end 00:06:23.351 00:06:23.351 real 0m1.314s 00:06:23.351 user 0m1.124s 00:06:23.351 sys 0m0.081s 00:06:23.351 23:01:42 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.351 23:01:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:23.351 ************************************ 00:06:23.351 END TEST event_reactor 00:06:23.351 ************************************ 00:06:23.351 23:01:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.351 23:01:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:23.351 23:01:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.351 23:01:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.351 ************************************ 00:06:23.351 START TEST event_reactor_perf 00:06:23.351 ************************************ 00:06:23.351 23:01:42 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:23.351 [2024-11-18 23:01:42.425743] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:23.351 [2024-11-18 23:01:42.425860] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70910 ] 00:06:23.351 [2024-11-18 23:01:42.572547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.351 [2024-11-18 23:01:42.615121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.375 test_start 00:06:24.375 test_end 00:06:24.375 Performance: 313271 events per second 00:06:24.375 00:06:24.375 real 0m1.296s 00:06:24.375 user 0m1.110s 00:06:24.375 sys 0m0.079s 00:06:24.376 23:01:43 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.376 ************************************ 00:06:24.376 END TEST event_reactor_perf 00:06:24.376 ************************************ 00:06:24.376 23:01:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.633 23:01:43 event -- event/event.sh@49 -- # uname -s 00:06:24.634 23:01:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:24.634 23:01:43 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.634 23:01:43 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.634 23:01:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.634 23:01:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.634 ************************************ 00:06:24.634 START TEST event_scheduler 00:06:24.634 ************************************ 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:24.634 * Looking for test storage... 
00:06:24.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.634 23:01:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.634 --rc genhtml_branch_coverage=1 00:06:24.634 --rc genhtml_function_coverage=1 00:06:24.634 --rc genhtml_legend=1 00:06:24.634 --rc geninfo_all_blocks=1 00:06:24.634 --rc geninfo_unexecuted_blocks=1 00:06:24.634 00:06:24.634 ' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.634 --rc genhtml_branch_coverage=1 00:06:24.634 --rc genhtml_function_coverage=1 00:06:24.634 --rc genhtml_legend=1 00:06:24.634 --rc geninfo_all_blocks=1 00:06:24.634 --rc geninfo_unexecuted_blocks=1 00:06:24.634 00:06:24.634 ' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.634 --rc genhtml_branch_coverage=1 00:06:24.634 --rc genhtml_function_coverage=1 00:06:24.634 --rc genhtml_legend=1 00:06:24.634 --rc geninfo_all_blocks=1 00:06:24.634 --rc geninfo_unexecuted_blocks=1 00:06:24.634 00:06:24.634 ' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.634 --rc genhtml_branch_coverage=1 00:06:24.634 --rc genhtml_function_coverage=1 00:06:24.634 --rc genhtml_legend=1 00:06:24.634 --rc geninfo_all_blocks=1 00:06:24.634 --rc geninfo_unexecuted_blocks=1 00:06:24.634 00:06:24.634 ' 00:06:24.634 23:01:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:24.634 23:01:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70975 00:06:24.634 23:01:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.634 23:01:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:24.634 23:01:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70975 00:06:24.634 23:01:43 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70975 ']' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.634 23:01:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.634 [2024-11-18 23:01:43.963563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:24.634 [2024-11-18 23:01:43.963665] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:06:24.892 [2024-11-18 23:01:44.108460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.892 [2024-11-18 23:01:44.153701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.892 [2024-11-18 23:01:44.154045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.892 [2024-11-18 23:01:44.154260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.892 [2024-11-18 23:01:44.154340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:25.519 23:01:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.519 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.519 POWER: Cannot set governor of lcore 0 to performance 00:06:25.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.519 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.519 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:25.519 POWER: Cannot set governor of lcore 0 to userspace 00:06:25.519 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:25.519 POWER: Unable to set Power Management Environment for lcore 0 00:06:25.519 [2024-11-18 23:01:44.820125] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:25.519 [2024-11-18 23:01:44.820146] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:25.519 [2024-11-18 23:01:44.820168] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:25.519 [2024-11-18 23:01:44.820221] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:25.519 [2024-11-18 
23:01:44.820229] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:25.519 [2024-11-18 23:01:44.820239] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.519 23:01:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.519 23:01:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 [2024-11-18 23:01:44.891258] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:25.778 23:01:44 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:25.778 23:01:44 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.778 23:01:44 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 ************************************ 00:06:25.778 START TEST scheduler_create_thread 00:06:25.778 ************************************ 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 2 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 3 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 4 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 5 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 6 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 7 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 8 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 9 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 10 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.778 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.778 23:01:45 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:25.778 23:01:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:25.778 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.778 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.720 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.720 23:01:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:26.720 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.720 23:01:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.107 23:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.107 23:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:28.107 23:01:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:28.107 23:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.107 23:01:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.061 ************************************ 00:06:29.061 END TEST scheduler_create_thread 00:06:29.061 ************************************ 00:06:29.061 23:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.061 00:06:29.061 real 0m3.372s 00:06:29.061 user 0m0.021s 00:06:29.061 sys 0m0.002s 00:06:29.061 23:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.061 23:01:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.061 23:01:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:29.061 23:01:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70975 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70975 ']' 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70975 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70975 00:06:29.061 killing process with pid 70975 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70975' 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70975 00:06:29.061 23:01:48 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70975 00:06:29.320 [2024-11-18 23:01:48.655800] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:29.581 00:06:29.581 real 0m5.100s 00:06:29.581 user 0m10.157s 00:06:29.581 sys 0m0.345s 00:06:29.581 23:01:48 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.581 23:01:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:29.581 ************************************ 00:06:29.581 END TEST event_scheduler 00:06:29.581 ************************************ 00:06:29.581 23:01:48 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:29.581 23:01:48 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:29.581 23:01:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.581 23:01:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.581 23:01:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.581 ************************************ 00:06:29.581 START TEST app_repeat 00:06:29.581 ************************************ 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:29.581 Process app_repeat pid: 71081 00:06:29.581 spdk_app_start Round 0 00:06:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71081 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71081' 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:29.581 23:01:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71081 /var/tmp/spdk-nbd.sock 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71081 ']' 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.581 23:01:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.581 [2024-11-18 23:01:48.952785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
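The app_repeat test launched above restarts the same SPDK app several times over its RPC socket. A condensed sketch of the loop the trace walks through below, using only the round count, helper names, and RPC calls visible in the trace (helper bodies elided; this is a reconstruction from the trace, not the test/event/event.sh source):

# Sketch of the app_repeat round loop, reconstructed from the traced commands.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock  # wait for the RPC socket
    $rpc bdev_malloc_create 64 4096                     # Malloc0
    $rpc bdev_malloc_create 64 4096                     # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    $rpc spdk_kill_instance SIGTERM                     # app comes back up for the next round
    sleep 3
done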
00:06:29.581 [2024-11-18 23:01:48.953040] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:06:29.840 [2024-11-18 23:01:49.094669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.840 [2024-11-18 23:01:49.141181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.840 [2024-11-18 23:01:49.141257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.774 23:01:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.774 23:01:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:30.774 23:01:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.774 Malloc0 00:06:30.774 23:01:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.031 Malloc1 00:06:31.031 23:01:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.031 23:01:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.032 23:01:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.292 /dev/nbd0 00:06:31.292 23:01:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.292 23:01:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.292 23:01:50 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.292 1+0 records in 00:06:31.292 1+0 records out 00:06:31.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336371 s, 12.2 MB/s 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.292 23:01:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.292 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.292 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.292 23:01:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.292 /dev/nbd1 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.554 1+0 records in 00:06:31.554 1+0 records out 00:06:31.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243021 s, 16.9 MB/s 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.554 23:01:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.554 23:01:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.554 { 00:06:31.554 "nbd_device": "/dev/nbd0", 00:06:31.554 "bdev_name": "Malloc0" 00:06:31.554 }, 00:06:31.555 { 00:06:31.555 "nbd_device": "/dev/nbd1", 00:06:31.555 "bdev_name": "Malloc1" 00:06:31.555 } 00:06:31.555 ]' 00:06:31.555 23:01:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.555 23:01:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.555 { 00:06:31.555 "nbd_device": "/dev/nbd0", 00:06:31.555 "bdev_name": "Malloc0" 00:06:31.555 }, 00:06:31.555 { 00:06:31.555 "nbd_device": "/dev/nbd1", 00:06:31.555 "bdev_name": "Malloc1" 00:06:31.555 } 00:06:31.555 ]' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.816 /dev/nbd1' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.816 /dev/nbd1' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.816 256+0 records in 00:06:31.816 256+0 records out 00:06:31.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102941 s, 102 MB/s 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.816 256+0 records in 00:06:31.816 256+0 records out 00:06:31.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152598 s, 68.7 MB/s 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.816 256+0 records in 00:06:31.816 256+0 records out 00:06:31.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165895 s, 63.2 MB/s 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.816 23:01:50 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.816 23:01:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.816 23:01:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.078 23:01:51 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.078 23:01:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.390 23:01:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.390 23:01:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.652 23:01:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.652 [2024-11-18 23:01:52.024711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.911 [2024-11-18 23:01:52.061266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.911 [2024-11-18 23:01:52.061302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.911 [2024-11-18 23:01:52.102020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.911 [2024-11-18 23:01:52.102281] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.213 23:01:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.213 23:01:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:36.213 spdk_app_start Round 1 00:06:36.213 23:01:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71081 /var/tmp/spdk-nbd.sock 00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71081 ']' 00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
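waitforlisten, invoked again here for Round 1, is the polling helper behind the "Waiting for process..." lines. A minimal sketch of that pattern, assuming a plain socket-file test (the real common/autotest_common.sh helper may probe differently); the pid argument, rpc_addr default, and max_retries=100 match the trace:

# Poll until $pid is up and its UNIX domain socket exists; sketch only,
# not SPDK's actual implementation.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # process died while we waited
        [ -S "$rpc_addr" ] && return 0          # assumption: socket file implies listening
        sleep 0.1
    done
    return 1
}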
00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.213 23:01:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.213 23:01:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.213 23:01:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.213 23:01:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.213 Malloc0 00:06:36.213 23:01:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.213 Malloc1 00:06:36.213 23:01:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.213 23:01:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.476 /dev/nbd0 00:06:36.476 23:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.476 23:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.476 1+0 records in 00:06:36.476 1+0 records out 
00:06:36.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364825 s, 11.2 MB/s 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.476 23:01:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.476 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.476 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.476 23:01:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.737 /dev/nbd1 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.737 1+0 records in 00:06:36.737 1+0 records out 00:06:36.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423712 s, 9.7 MB/s 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.737 23:01:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.737 23:01:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.996 23:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.996 { 00:06:36.997 "nbd_device": "/dev/nbd0", 00:06:36.997 "bdev_name": "Malloc0" 00:06:36.997 }, 00:06:36.997 { 00:06:36.997 "nbd_device": "/dev/nbd1", 00:06:36.997 "bdev_name": "Malloc1" 00:06:36.997 } 
00:06:36.997 ]' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.997 { 00:06:36.997 "nbd_device": "/dev/nbd0", 00:06:36.997 "bdev_name": "Malloc0" 00:06:36.997 }, 00:06:36.997 { 00:06:36.997 "nbd_device": "/dev/nbd1", 00:06:36.997 "bdev_name": "Malloc1" 00:06:36.997 } 00:06:36.997 ]' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.997 /dev/nbd1' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.997 /dev/nbd1' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.997 256+0 records in 00:06:36.997 256+0 records out 00:06:36.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00696209 s, 151 MB/s 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.997 256+0 records in 00:06:36.997 256+0 records out 00:06:36.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298698 s, 35.1 MB/s 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.997 256+0 records in 00:06:36.997 256+0 records out 00:06:36.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169036 s, 62.0 MB/s 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.997 23:01:56 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.997 23:01:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.253 23:01:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.511 23:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.773 23:01:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.773 23:01:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.034 23:01:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.034 [2024-11-18 23:01:57.326241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.034 [2024-11-18 23:01:57.363967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.034 [2024-11-18 23:01:57.364091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.034 [2024-11-18 23:01:57.405061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.034 [2024-11-18 23:01:57.405114] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.329 spdk_app_start Round 2 00:06:41.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.329 23:02:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.329 23:02:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:41.329 23:02:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71081 /var/tmp/spdk-nbd.sock 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71081 ']' 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
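Round 2 below repeats the same write-then-verify cycle already seen in Rounds 0 and 1. Condensed from the dd and cmp commands in the trace (a sketch of the traced flow, not the bdev/nbd_common.sh source; the temp-file path is shortened here):

# Fill a 1 MiB temp file with random data, write it to each NBD device
# with O_DIRECT, then compare the device contents back against the file.
tmp_file=/tmp/nbdrandtest   # trace uses test/event/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp_file" "$nbd"   # -b prints differing bytes on mismatch
done
rm "$tmp_file"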
00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.329 23:02:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.329 23:02:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.329 Malloc0 00:06:41.329 23:02:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.590 Malloc1 00:06:41.590 23:02:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.590 23:02:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.851 /dev/nbd0 00:06:41.851 23:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.851 23:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.851 1+0 records in 00:06:41.851 1+0 records out 
00:06:41.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031544 s, 13.0 MB/s 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:41.851 23:02:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:41.851 23:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.851 23:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.851 23:02:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:42.110 /dev/nbd1 00:06:42.110 23:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:42.110 23:02:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.110 1+0 records in 00:06:42.110 1+0 records out 00:06:42.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188773 s, 21.7 MB/s 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.110 23:02:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:42.111 23:02:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:42.111 23:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.111 23:02:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.111 23:02:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.111 23:02:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.111 23:02:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.372 { 00:06:42.372 "nbd_device": "/dev/nbd0", 00:06:42.372 "bdev_name": "Malloc0" 00:06:42.372 }, 00:06:42.372 { 00:06:42.372 "nbd_device": "/dev/nbd1", 00:06:42.372 "bdev_name": "Malloc1" 00:06:42.372 } 
00:06:42.372 ]' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.372 { 00:06:42.372 "nbd_device": "/dev/nbd0", 00:06:42.372 "bdev_name": "Malloc0" 00:06:42.372 }, 00:06:42.372 { 00:06:42.372 "nbd_device": "/dev/nbd1", 00:06:42.372 "bdev_name": "Malloc1" 00:06:42.372 } 00:06:42.372 ]' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.372 /dev/nbd1' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.372 /dev/nbd1' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.372 256+0 records in 00:06:42.372 256+0 records out 00:06:42.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00938853 s, 112 MB/s 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.372 256+0 records in 00:06:42.372 256+0 records out 00:06:42.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143107 s, 73.3 MB/s 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.372 256+0 records in 00:06:42.372 256+0 records out 00:06:42.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177644 s, 59.0 MB/s 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.372 23:02:01 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.372 23:02:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.633 23:02:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.893 23:02:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.893 23:02:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.893 23:02:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.893 23:02:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.894 23:02:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:43.155 23:02:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:43.155 23:02:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.417 23:02:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.417 [2024-11-18 23:02:02.670872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.417 [2024-11-18 23:02:02.708368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.417 [2024-11-18 23:02:02.708384] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.417 [2024-11-18 23:02:02.749387] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.417 [2024-11-18 23:02:02.749435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.713 23:02:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71081 /var/tmp/spdk-nbd.sock 00:06:46.713 23:02:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71081 ']' 00:06:46.713 23:02:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.713 23:02:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
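A note on the waitforlisten helper traced here: it blocks until the target process is alive and its RPC socket accepts connections. A minimal sketch of the pattern, assuming the common case of a UNIX-domain RPC socket (the real helper in autotest_common.sh also probes the socket with rpc.py and handles TCP addresses):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # process died before it could listen
            [[ -S $rpc_addr ]] && return 0            # socket node exists: server is up
            sleep 0.1
        done
        return 1                                      # timed out waiting for the listener
    }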
00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:46.714 23:02:05 event.app_repeat -- event/event.sh@39 -- # killprocess 71081 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71081 ']' 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71081 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71081 00:06:46.714 killing process with pid 71081 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71081' 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71081 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71081 00:06:46.714 spdk_app_start is called in Round 0. 00:06:46.714 Shutdown signal received, stop current app iteration 00:06:46.714 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.714 spdk_app_start is called in Round 1. 00:06:46.714 Shutdown signal received, stop current app iteration 00:06:46.714 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.714 spdk_app_start is called in Round 2. 00:06:46.714 Shutdown signal received, stop current app iteration 00:06:46.714 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:46.714 spdk_app_start is called in Round 3. 00:06:46.714 Shutdown signal received, stop current app iteration 00:06:46.714 ************************************ 00:06:46.714 END TEST app_repeat 00:06:46.714 ************************************ 00:06:46.714 23:02:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.714 23:02:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.714 00:06:46.714 real 0m17.055s 00:06:46.714 user 0m37.891s 00:06:46.714 sys 0m2.182s 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.714 23:02:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.714 23:02:06 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.714 23:02:06 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.714 23:02:06 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.714 23:02:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.714 23:02:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.714 ************************************ 00:06:46.714 START TEST cpu_locks 00:06:46.714 ************************************ 00:06:46.714 23:02:06 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:46.714 * Looking for test storage... 
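A note on the killprocess helper traced in the app_repeat teardown above: it follows a common shutdown pattern — kill -0 to confirm the pid is still alive, ps --no-headers -o comm= to identify what would be signalled, then SIGTERM plus wait to reap the child. A hedged sketch (the real helper also special-cases processes running under sudo):

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                   # fail early if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap the child; a SIGTERM exit status is expected
    }

Note that wait only works here because spdk_tgt was launched by the same test shell.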
00:06:46.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:46.714 23:02:06 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.972 23:02:06 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.972 --rc genhtml_branch_coverage=1 00:06:46.972 --rc genhtml_function_coverage=1 00:06:46.972 --rc genhtml_legend=1 00:06:46.972 --rc geninfo_all_blocks=1 00:06:46.972 --rc geninfo_unexecuted_blocks=1 00:06:46.972 00:06:46.972 ' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.972 --rc genhtml_branch_coverage=1 00:06:46.972 --rc genhtml_function_coverage=1 
00:06:46.972 --rc genhtml_legend=1 00:06:46.972 --rc geninfo_all_blocks=1 00:06:46.972 --rc geninfo_unexecuted_blocks=1 00:06:46.972 00:06:46.972 ' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.972 --rc genhtml_branch_coverage=1 00:06:46.972 --rc genhtml_function_coverage=1 00:06:46.972 --rc genhtml_legend=1 00:06:46.972 --rc geninfo_all_blocks=1 00:06:46.972 --rc geninfo_unexecuted_blocks=1 00:06:46.972 00:06:46.972 ' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.972 --rc genhtml_branch_coverage=1 00:06:46.972 --rc genhtml_function_coverage=1 00:06:46.972 --rc genhtml_legend=1 00:06:46.972 --rc geninfo_all_blocks=1 00:06:46.972 --rc geninfo_unexecuted_blocks=1 00:06:46.972 00:06:46.972 ' 00:06:46.972 23:02:06 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.972 23:02:06 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.972 23:02:06 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.972 23:02:06 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.972 23:02:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.972 ************************************ 00:06:46.972 START TEST default_locks 00:06:46.972 ************************************ 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71506 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71506 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71506 ']' 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.972 23:02:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.972 [2024-11-18 23:02:06.246498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:46.972 [2024-11-18 23:02:06.246627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71506 ] 00:06:47.231 [2024-11-18 23:02:06.393046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.231 [2024-11-18 23:02:06.433129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.798 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.798 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:47.798 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71506 00:06:47.798 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71506 00:06:47.798 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71506 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71506 ']' 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71506 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71506 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.057 killing process with pid 71506 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71506' 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71506 00:06:48.057 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71506 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71506 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71506 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71506 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71506 ']' 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.315 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.315 ERROR: process (pid: 71506) is no longer running 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71506) - No such process 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.315 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.316 00:06:48.316 real 0m1.486s 00:06:48.316 user 0m1.479s 00:06:48.316 sys 0m0.465s 00:06:48.316 ************************************ 00:06:48.316 END TEST default_locks 00:06:48.316 ************************************ 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.316 23:02:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.316 23:02:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.316 23:02:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.316 23:02:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.316 23:02:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 ************************************ 00:06:48.574 START TEST default_locks_via_rpc 00:06:48.574 ************************************ 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71548 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71548 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71548 ']' 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
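The NOT wrapper used for the negative check above inverts an exit status: the surrounding test passes only if the wrapped command fails, while deaths by signal (status > 128) are still treated as real errors rather than the expected failure. Roughly, as traced:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( es != 0 ))                    # success here means the command failed as expected
    }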
00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 23:02:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.574 [2024-11-18 23:02:07.771520] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:48.574 [2024-11-18 23:02:07.771652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71548 ] 00:06:48.574 [2024-11-18 23:02:07.915004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.837 [2024-11-18 23:02:07.955295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71548 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71548 00:06:49.407 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71548 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71548 ']' 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71548 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.665 23:02:08 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71548 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.665 killing process with pid 71548 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71548' 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71548 00:06:49.665 23:02:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71548 00:06:49.924 00:06:49.924 real 0m1.440s 00:06:49.924 user 0m1.465s 00:06:49.924 sys 0m0.426s 00:06:49.924 23:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.924 23:02:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.924 ************************************ 00:06:49.924 END TEST default_locks_via_rpc 00:06:49.924 ************************************ 00:06:49.924 23:02:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.924 23:02:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.924 23:02:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.924 23:02:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.924 ************************************ 00:06:49.924 START TEST non_locking_app_on_locked_coremask 00:06:49.924 ************************************ 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71600 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71600 /var/tmp/spdk.sock 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71600 ']' 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.924 23:02:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.924 [2024-11-18 23:02:09.244427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:49.924 [2024-11-18 23:02:09.244527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71600 ] 00:06:50.182 [2024-11-18 23:02:09.383314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.182 [2024-11-18 23:02:09.423342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.747 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.747 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.747 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71616 00:06:50.747 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71616 /var/tmp/spdk2.sock 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71616 ']' 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.748 23:02:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.004 [2024-11-18 23:02:10.171861] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:51.004 [2024-11-18 23:02:10.171973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71616 ] 00:06:51.004 [2024-11-18 23:02:10.316239] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
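What makes this two-instance test work: on startup spdk_tgt normally takes a file lock for each core in its cpumask (lock files of the form /var/tmp/spdk_cpu_lock_* — an assumption inferred from the lslocks checks in this run), and --disable-cpumask-locks, as passed to pid 71616 above, skips that claim so a second target can share core 0. The locks_exist check then only needs lslocks, roughly:

    locks_exist() {
        # passes only if the given pid holds at least one spdk_cpu_lock file lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }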
00:06:51.004 [2024-11-18 23:02:10.316283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.262 [2024-11-18 23:02:10.406296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.827 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.827 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.827 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71600 00:06:51.827 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.827 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71600 ']' 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.085 killing process with pid 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71600' 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71600 00:06:52.085 23:02:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71600 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71616 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71616 ']' 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71616 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71616 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71616' 00:06:53.062 killing process with pid 71616 00:06:53.062 23:02:12 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71616 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71616 00:06:53.062 00:06:53.062 real 0m3.198s 00:06:53.062 user 0m3.416s 00:06:53.062 sys 0m0.900s 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.062 ************************************ 00:06:53.062 END TEST non_locking_app_on_locked_coremask 00:06:53.062 ************************************ 00:06:53.062 23:02:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.340 23:02:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:53.340 23:02:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.340 23:02:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.340 23:02:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.340 ************************************ 00:06:53.340 START TEST locking_app_on_unlocked_coremask 00:06:53.340 ************************************ 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71674 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71674 /var/tmp/spdk.sock 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71674 ']' 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.340 23:02:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:53.340 [2024-11-18 23:02:12.502116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:53.340 [2024-11-18 23:02:12.502246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71674 ] 00:06:53.340 [2024-11-18 23:02:12.639383] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
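Each target in these two-instance tests gets its own RPC endpoint via -r, so the harness can drive them independently over separate UNIX sockets. A sketch of the shape of it (the backgrounding, pid capture, and the framework_get_reactors call are illustrative, not the exact test flow):

    spdk_tgt -m 0x1 --disable-cpumask-locks &              # first instance, default /var/tmp/spdk.sock
    pid1=$!
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &               # second instance, private socket
    pid2=$!
    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors    # addresses pid1
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors   # addresses pid2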
00:06:53.340 [2024-11-18 23:02:12.639440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.340 [2024-11-18 23:02:12.681112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71690 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71690 /var/tmp/spdk2.sock 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71690 ']' 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.941 23:02:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.197 [2024-11-18 23:02:13.385682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:54.197 [2024-11-18 23:02:13.385828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71690 ] 00:06:54.197 [2024-11-18 23:02:13.534558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.455 [2024-11-18 23:02:13.620437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.021 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.021 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.021 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71690 00:06:55.021 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71690 00:06:55.021 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71674 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71674 ']' 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71674 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71674 00:06:55.281 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.281 killing process with pid 71674 00:06:55.282 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.282 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71674' 00:06:55.282 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71674 00:06:55.282 23:02:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71674 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71690 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71690 ']' 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71690 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71690 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.848 killing process with pid 71690 00:06:55.848 23:02:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71690' 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71690 00:06:55.848 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71690 00:06:56.415 00:06:56.415 real 0m3.104s 00:06:56.415 user 0m3.265s 00:06:56.415 sys 0m0.903s 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.415 ************************************ 00:06:56.415 END TEST locking_app_on_unlocked_coremask 00:06:56.415 ************************************ 00:06:56.415 23:02:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:56.415 23:02:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.415 23:02:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.415 23:02:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.415 ************************************ 00:06:56.415 START TEST locking_app_on_locked_coremask 00:06:56.415 ************************************ 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71748 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71748 /var/tmp/spdk.sock 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71748 ']' 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.415 23:02:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.415 [2024-11-18 23:02:15.646117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:56.415 [2024-11-18 23:02:15.646239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71748 ] 00:06:56.415 [2024-11-18 23:02:15.783343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.675 [2024-11-18 23:02:15.824874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71764 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71764 /var/tmp/spdk2.sock 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71764 /var/tmp/spdk2.sock 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71764 /var/tmp/spdk2.sock 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71764 ']' 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.246 23:02:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.246 [2024-11-18 23:02:16.528190] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:57.246 [2024-11-18 23:02:16.528305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71764 ] 00:06:57.503 [2024-11-18 23:02:16.672834] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71748 has claimed it. 00:06:57.503 [2024-11-18 23:02:16.672916] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.070 ERROR: process (pid: 71764) is no longer running 00:06:58.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71764) - No such process 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71748 ']' 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.070 killing process with pid 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71748' 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71748 00:06:58.070 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71748 00:06:58.328 00:06:58.328 real 0m2.112s 00:06:58.328 user 0m2.273s 00:06:58.328 sys 0m0.565s 00:06:58.328 23:02:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.328 23:02:17 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.328 ************************************ 00:06:58.328 END TEST locking_app_on_locked_coremask 00:06:58.328 ************************************ 00:06:58.588 23:02:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:58.588 23:02:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.588 23:02:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.588 23:02:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.588 ************************************ 00:06:58.588 START TEST locking_overlapped_coremask 00:06:58.588 ************************************ 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71806 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71806 /var/tmp/spdk.sock 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71806 ']' 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.588 23:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.588 [2024-11-18 23:02:17.802009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:58.588 [2024-11-18 23:02:17.802309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71806 ] 00:06:58.588 [2024-11-18 23:02:17.949121] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.847 [2024-11-18 23:02:17.992545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.847 [2024-11-18 23:02:17.992814] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.847 [2024-11-18 23:02:17.992896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71824 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71824 /var/tmp/spdk2.sock 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71824 /var/tmp/spdk2.sock 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.417 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71824 /var/tmp/spdk2.sock 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71824 ']' 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.418 23:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.418 [2024-11-18 23:02:18.722547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
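The error that follows is expected by the test: the first spdk_tgt above was started with core mask 0x7 (cores 0-2) and the second with 0x1c (cores 2-4), so both masks claim core 2 and the second process cannot take the per-core lock. A quick way to see the overlap (an illustrative one-liner, not part of the test suite):

    $ python3 -c 'a, b = 0x7, 0x1c; print(hex(a & b))'
    0x4    # bit 2 set -> core 2 is claimed by both masks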
00:06:59.418 [2024-11-18 23:02:18.722664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71824 ] 00:06:59.682 [2024-11-18 23:02:18.877470] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71806 has claimed it. 00:06:59.682 [2024-11-18 23:02:18.877540] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.280 ERROR: process (pid: 71824) is no longer running 00:07:00.280 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71824) - No such process 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71806 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71806 ']' 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71806 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71806 00:07:00.280 killing process with pid 71806 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71806' 00:07:00.280 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71806 00:07:00.280 23:02:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71806 00:07:00.539 00:07:00.539 real 0m1.993s 00:07:00.539 user 0m5.426s 00:07:00.539 sys 0m0.451s 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 ************************************ 00:07:00.539 END TEST locking_overlapped_coremask 00:07:00.539 ************************************ 00:07:00.539 23:02:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:00.539 23:02:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.539 23:02:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.539 23:02:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 ************************************ 00:07:00.539 START TEST locking_overlapped_coremask_via_rpc 00:07:00.539 ************************************ 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71866 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71866 /var/tmp/spdk.sock 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71866 ']' 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.539 23:02:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 [2024-11-18 23:02:19.860314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:00.539 [2024-11-18 23:02:19.860447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71866 ] 00:07:00.800 [2024-11-18 23:02:20.012011] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.800 [2024-11-18 23:02:20.012074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.800 [2024-11-18 23:02:20.060368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.800 [2024-11-18 23:02:20.060669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.800 [2024-11-18 23:02:20.060703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71884 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71884 /var/tmp/spdk2.sock 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71884 ']' 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.740 23:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.740 [2024-11-18 23:02:20.832417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:01.740 [2024-11-18 23:02:20.832594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71884 ] 00:07:01.740 [2024-11-18 23:02:20.989957] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:01.740 [2024-11-18 23:02:20.990008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.740 [2024-11-18 23:02:21.061364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.740 [2024-11-18 23:02:21.061553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.740 [2024-11-18 23:02:21.061627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.681 [2024-11-18 23:02:21.756380] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71866 has claimed it. 00:07:02.681 request: 00:07:02.681 { 00:07:02.681 "method": "framework_enable_cpumask_locks", 00:07:02.681 "req_id": 1 00:07:02.681 } 00:07:02.681 Got JSON-RPC error response 00:07:02.681 response: 00:07:02.681 { 00:07:02.681 "code": -32603, 00:07:02.681 "message": "Failed to claim CPU core: 2" 00:07:02.681 } 00:07:02.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
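The exchange above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks ("CPU core locks deactivated"), the first target then takes the locks at runtime with framework_enable_cpumask_locks, and the same call against the second target's socket fails with -32603 because core 2 is already locked. Reproduced by hand it would look roughly like this (illustrative sketch, paths as in this workspace):

    $ scripts/rpc.py framework_enable_cpumask_locks                          # first target (mask 0x7): succeeds
    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target (mask 0x1c): Failed to claim CPU core: 2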
00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71866 /var/tmp/spdk.sock 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71866 ']' 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71884 /var/tmp/spdk2.sock 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71884 ']' 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.681 23:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.938 ************************************ 00:07:02.938 END TEST locking_overlapped_coremask_via_rpc 00:07:02.938 ************************************ 00:07:02.938 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.938 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.939 00:07:02.939 real 0m2.406s 00:07:02.939 user 0m1.195s 00:07:02.939 sys 0m0.139s 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.939 23:02:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.939 23:02:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:02.939 23:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71866 ]] 00:07:02.939 23:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71866 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71866 ']' 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71866 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71866 00:07:02.939 killing process with pid 71866 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71866' 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71866 00:07:02.939 23:02:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71866 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71884 ]] 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71884 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71884 ']' 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71884 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.510 
23:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71884 00:07:03.510 killing process with pid 71884 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71884' 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71884 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71884 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.510 Process with pid 71866 is not found 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71866 ]] 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71866 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71866 ']' 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71866 00:07:03.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71866) - No such process 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71866 is not found' 00:07:03.510 Process with pid 71884 is not found 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71884 ]] 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71884 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71884 ']' 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71884 00:07:03.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71884) - No such process 00:07:03.510 23:02:22 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71884 is not found' 00:07:03.510 23:02:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:03.772 ************************************ 00:07:03.772 END TEST cpu_locks 00:07:03.772 ************************************ 00:07:03.772 00:07:03.772 real 0m16.870s 00:07:03.772 user 0m29.536s 00:07:03.772 sys 0m4.711s 00:07:03.772 23:02:22 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.772 23:02:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.772 ************************************ 00:07:03.772 END TEST event 00:07:03.772 ************************************ 00:07:03.772 00:07:03.772 real 0m43.367s 00:07:03.772 user 1m24.068s 00:07:03.772 sys 0m7.720s 00:07:03.772 23:02:22 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.772 23:02:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.772 23:02:22 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.772 23:02:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.772 23:02:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.772 23:02:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.772 ************************************ 00:07:03.772 START TEST thread 00:07:03.772 ************************************ 00:07:03.772 23:02:22 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:03.772 * Looking for test storage... 
00:07:03.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.772 23:02:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.772 23:02:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.772 23:02:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.772 23:02:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.772 23:02:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.772 23:02:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.772 23:02:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.772 23:02:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.772 23:02:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.772 23:02:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.772 23:02:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.772 23:02:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:03.772 23:02:23 thread -- scripts/common.sh@345 -- # : 1 00:07:03.772 23:02:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.772 23:02:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.772 23:02:23 thread -- scripts/common.sh@365 -- # decimal 1 00:07:03.772 23:02:23 thread -- scripts/common.sh@353 -- # local d=1 00:07:03.772 23:02:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.772 23:02:23 thread -- scripts/common.sh@355 -- # echo 1 00:07:03.772 23:02:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.772 23:02:23 thread -- scripts/common.sh@366 -- # decimal 2 00:07:03.772 23:02:23 thread -- scripts/common.sh@353 -- # local d=2 00:07:03.772 23:02:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.772 23:02:23 thread -- scripts/common.sh@355 -- # echo 2 00:07:03.772 23:02:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.772 23:02:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.772 23:02:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.772 23:02:23 thread -- scripts/common.sh@368 -- # return 0 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.772 --rc genhtml_branch_coverage=1 00:07:03.772 --rc genhtml_function_coverage=1 00:07:03.772 --rc genhtml_legend=1 00:07:03.772 --rc geninfo_all_blocks=1 00:07:03.772 --rc geninfo_unexecuted_blocks=1 00:07:03.772 00:07:03.772 ' 00:07:03.772 23:02:23 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.772 --rc genhtml_branch_coverage=1 00:07:03.772 --rc genhtml_function_coverage=1 00:07:03.772 --rc genhtml_legend=1 00:07:03.772 --rc geninfo_all_blocks=1 00:07:03.772 --rc geninfo_unexecuted_blocks=1 00:07:03.772 00:07:03.772 ' 00:07:03.773 23:02:23 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:03.773 --rc genhtml_branch_coverage=1 00:07:03.773 --rc genhtml_function_coverage=1 00:07:03.773 --rc genhtml_legend=1 00:07:03.773 --rc geninfo_all_blocks=1 00:07:03.773 --rc geninfo_unexecuted_blocks=1 00:07:03.773 00:07:03.773 ' 00:07:03.773 23:02:23 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.773 --rc genhtml_branch_coverage=1 00:07:03.773 --rc genhtml_function_coverage=1 00:07:03.773 --rc genhtml_legend=1 00:07:03.773 --rc geninfo_all_blocks=1 00:07:03.773 --rc geninfo_unexecuted_blocks=1 00:07:03.773 00:07:03.773 ' 00:07:03.773 23:02:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:03.773 23:02:23 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:03.773 23:02:23 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.773 23:02:23 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.773 ************************************ 00:07:03.773 START TEST thread_poller_perf 00:07:03.773 ************************************ 00:07:03.773 23:02:23 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.033 [2024-11-18 23:02:23.160497] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.033 [2024-11-18 23:02:23.160711] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72011 ] 00:07:04.033 [2024-11-18 23:02:23.310342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.033 [2024-11-18 23:02:23.353018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.033 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:05.406 [2024-11-18T23:02:24.784Z] ====================================== 00:07:05.406 [2024-11-18T23:02:24.784Z] busy:2607968570 (cyc) 00:07:05.406 [2024-11-18T23:02:24.784Z] total_run_count: 354000 00:07:05.406 [2024-11-18T23:02:24.784Z] tsc_hz: 2600000000 (cyc) 00:07:05.406 [2024-11-18T23:02:24.784Z] ====================================== 00:07:05.406 [2024-11-18T23:02:24.784Z] poller_cost: 7367 (cyc), 2833 (nsec) 00:07:05.406 ************************************ 00:07:05.406 END TEST thread_poller_perf 00:07:05.406 ************************************ 00:07:05.406 00:07:05.406 real 0m1.290s 00:07:05.406 user 0m1.111s 00:07:05.406 sys 0m0.072s 00:07:05.406 23:02:24 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.406 23:02:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.406 23:02:24 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.406 23:02:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:05.406 23:02:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.406 23:02:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.406 ************************************ 00:07:05.406 START TEST thread_poller_perf 00:07:05.406 ************************************ 00:07:05.406 23:02:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.406 [2024-11-18 23:02:24.493873] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:05.406 [2024-11-18 23:02:24.493994] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72053 ] 00:07:05.407 [2024-11-18 23:02:24.638913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.407 Running 1000 pollers for 1 seconds with 0 microseconds period. 
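The summary block above can be cross-checked by hand: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. For this run, 2607968570 / 354000 ≈ 7367 cyc and 7367 / 2.6 GHz ≈ 2833 nsec, matching the reported values. The same arithmetic as a one-liner (illustrative, not part of poller_perf):

    $ python3 -c 'busy, runs, hz = 2607968570, 354000, 2600000000; cyc = busy // runs; print(cyc, "cyc,", round(cyc / hz * 1e9), "nsec")'
    7367 cyc, 2833 nsec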
00:07:05.407 [2024-11-18 23:02:24.679685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.781 [2024-11-18T23:02:26.159Z] ====================================== 00:07:06.781 [2024-11-18T23:02:26.159Z] busy:2602562436 (cyc) 00:07:06.781 [2024-11-18T23:02:26.159Z] total_run_count: 5256000 00:07:06.781 [2024-11-18T23:02:26.159Z] tsc_hz: 2600000000 (cyc) 00:07:06.781 [2024-11-18T23:02:26.159Z] ====================================== 00:07:06.781 [2024-11-18T23:02:26.159Z] poller_cost: 495 (cyc), 190 (nsec) 00:07:06.781 ************************************ 00:07:06.781 END TEST thread_poller_perf 00:07:06.781 ************************************ 00:07:06.781 00:07:06.781 real 0m1.280s 00:07:06.781 user 0m1.105s 00:07:06.781 sys 0m0.069s 00:07:06.781 23:02:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.781 23:02:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.781 23:02:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:06.781 00:07:06.781 real 0m2.806s 00:07:06.781 user 0m2.329s 00:07:06.781 sys 0m0.256s 00:07:06.781 ************************************ 00:07:06.781 END TEST thread 00:07:06.781 ************************************ 00:07:06.781 23:02:25 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.781 23:02:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.781 23:02:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:06.781 23:02:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.781 23:02:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.781 23:02:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.781 23:02:25 -- common/autotest_common.sh@10 -- # set +x 00:07:06.781 ************************************ 00:07:06.781 START TEST app_cmdline 00:07:06.781 ************************************ 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:06.781 * Looking for test storage... 
00:07:06.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.781 23:02:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.781 --rc genhtml_branch_coverage=1 00:07:06.781 --rc genhtml_function_coverage=1 00:07:06.781 --rc genhtml_legend=1 00:07:06.781 --rc geninfo_all_blocks=1 00:07:06.781 --rc geninfo_unexecuted_blocks=1 00:07:06.781 00:07:06.781 ' 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.781 --rc genhtml_branch_coverage=1 00:07:06.781 --rc genhtml_function_coverage=1 00:07:06.781 --rc genhtml_legend=1 00:07:06.781 --rc geninfo_all_blocks=1 00:07:06.781 --rc geninfo_unexecuted_blocks=1 00:07:06.781 
00:07:06.781 ' 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.781 --rc genhtml_branch_coverage=1 00:07:06.781 --rc genhtml_function_coverage=1 00:07:06.781 --rc genhtml_legend=1 00:07:06.781 --rc geninfo_all_blocks=1 00:07:06.781 --rc geninfo_unexecuted_blocks=1 00:07:06.781 00:07:06.781 ' 00:07:06.781 23:02:25 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.781 --rc genhtml_branch_coverage=1 00:07:06.781 --rc genhtml_function_coverage=1 00:07:06.781 --rc genhtml_legend=1 00:07:06.781 --rc geninfo_all_blocks=1 00:07:06.781 --rc geninfo_unexecuted_blocks=1 00:07:06.781 00:07:06.781 ' 00:07:06.782 23:02:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.782 23:02:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72131 00:07:06.782 23:02:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72131 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72131 ']' 00:07:06.782 23:02:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.782 23:02:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.782 [2024-11-18 23:02:26.049815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
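Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on this spdk_tgt invocation: only those two methods are served, and any other request is rejected with JSON-RPC error -32601 (Method not found), which is exactly what the env_dpdk_get_mem_stats probe further down demonstrates. By hand the contrast would look like this (illustrative sketch):

    $ scripts/rpc.py spdk_get_version          # allowed: returns the version object shown below
    $ scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow list: "Method not found" (-32601)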
00:07:06.782 [2024-11-18 23:02:26.049932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72131 ] 00:07:07.040 [2024-11-18 23:02:26.194448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.040 [2024-11-18 23:02:26.234915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.608 23:02:26 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.608 23:02:26 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:07.608 23:02:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:07.866 { 00:07:07.866 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:07.866 "fields": { 00:07:07.866 "major": 24, 00:07:07.866 "minor": 9, 00:07:07.866 "patch": 1, 00:07:07.866 "suffix": "-pre", 00:07:07.866 "commit": "b18e1bd62" 00:07:07.866 } 00:07:07.866 } 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.866 23:02:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:07.866 23:02:27 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.124 request: 00:07:08.124 { 00:07:08.124 "method": "env_dpdk_get_mem_stats", 00:07:08.124 "req_id": 1 00:07:08.124 } 00:07:08.124 Got JSON-RPC error response 00:07:08.124 response: 00:07:08.124 { 00:07:08.124 "code": -32601, 00:07:08.124 "message": "Method not found" 00:07:08.124 } 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.124 23:02:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72131 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72131 ']' 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72131 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72131 00:07:08.124 killing process with pid 72131 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72131' 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@969 -- # kill 72131 00:07:08.124 23:02:27 app_cmdline -- common/autotest_common.sh@974 -- # wait 72131 00:07:08.386 00:07:08.386 real 0m1.833s 00:07:08.386 user 0m2.156s 00:07:08.386 sys 0m0.421s 00:07:08.386 ************************************ 00:07:08.386 END TEST app_cmdline 00:07:08.386 ************************************ 00:07:08.386 23:02:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.386 23:02:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.386 23:02:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.386 23:02:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.386 23:02:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.386 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.386 ************************************ 00:07:08.386 START TEST version 00:07:08.386 ************************************ 00:07:08.386 23:02:27 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.645 * Looking for test storage... 
00:07:08.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.645 23:02:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.645 23:02:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.645 23:02:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.645 23:02:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.645 23:02:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.645 23:02:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.645 23:02:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.645 23:02:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.645 23:02:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.645 23:02:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.645 23:02:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.645 23:02:27 version -- scripts/common.sh@344 -- # case "$op" in 00:07:08.645 23:02:27 version -- scripts/common.sh@345 -- # : 1 00:07:08.645 23:02:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.645 23:02:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.645 23:02:27 version -- scripts/common.sh@365 -- # decimal 1 00:07:08.645 23:02:27 version -- scripts/common.sh@353 -- # local d=1 00:07:08.645 23:02:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.645 23:02:27 version -- scripts/common.sh@355 -- # echo 1 00:07:08.645 23:02:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.645 23:02:27 version -- scripts/common.sh@366 -- # decimal 2 00:07:08.645 23:02:27 version -- scripts/common.sh@353 -- # local d=2 00:07:08.645 23:02:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.645 23:02:27 version -- scripts/common.sh@355 -- # echo 2 00:07:08.645 23:02:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.645 23:02:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.645 23:02:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.645 23:02:27 version -- scripts/common.sh@368 -- # return 0 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.645 23:02:27 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.645 --rc genhtml_branch_coverage=1 00:07:08.645 --rc genhtml_function_coverage=1 00:07:08.645 --rc genhtml_legend=1 00:07:08.646 --rc geninfo_all_blocks=1 00:07:08.646 --rc geninfo_unexecuted_blocks=1 00:07:08.646 00:07:08.646 ' 00:07:08.646 23:02:27 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.646 --rc genhtml_branch_coverage=1 00:07:08.646 --rc genhtml_function_coverage=1 00:07:08.646 --rc genhtml_legend=1 00:07:08.646 --rc geninfo_all_blocks=1 00:07:08.646 --rc geninfo_unexecuted_blocks=1 00:07:08.646 00:07:08.646 ' 00:07:08.646 23:02:27 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.646 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:08.646 --rc genhtml_branch_coverage=1 00:07:08.646 --rc genhtml_function_coverage=1 00:07:08.646 --rc genhtml_legend=1 00:07:08.646 --rc geninfo_all_blocks=1 00:07:08.646 --rc geninfo_unexecuted_blocks=1 00:07:08.646 00:07:08.646 ' 00:07:08.646 23:02:27 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.646 --rc genhtml_branch_coverage=1 00:07:08.646 --rc genhtml_function_coverage=1 00:07:08.646 --rc genhtml_legend=1 00:07:08.646 --rc geninfo_all_blocks=1 00:07:08.646 --rc geninfo_unexecuted_blocks=1 00:07:08.646 00:07:08.646 ' 00:07:08.646 23:02:27 version -- app/version.sh@17 -- # get_header_version major 00:07:08.646 23:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.646 23:02:27 version -- app/version.sh@17 -- # major=24 00:07:08.646 23:02:27 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.646 23:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.646 23:02:27 version -- app/version.sh@18 -- # minor=9 00:07:08.646 23:02:27 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.646 23:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.646 23:02:27 version -- app/version.sh@19 -- # patch=1 00:07:08.646 23:02:27 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # cut -f2 00:07:08.646 23:02:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:08.646 23:02:27 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.646 23:02:27 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.646 23:02:27 version -- app/version.sh@22 -- # version=24.9 00:07:08.646 23:02:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.646 23:02:27 version -- app/version.sh@25 -- # version=24.9.1 00:07:08.646 23:02:27 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:08.646 23:02:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:08.646 23:02:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:08.646 23:02:27 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:08.646 23:02:27 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:08.646 ************************************ 00:07:08.646 END TEST version 00:07:08.646 ************************************ 00:07:08.646 00:07:08.646 real 0m0.209s 00:07:08.646 user 0m0.124s 00:07:08.646 sys 0m0.109s 00:07:08.646 23:02:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.646 23:02:27 
version -- common/autotest_common.sh@10 -- # set +x 00:07:08.646 23:02:27 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:08.646 23:02:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:08.646 23:02:27 -- spdk/autotest.sh@194 -- # uname -s 00:07:08.646 23:02:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:08.646 23:02:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.646 23:02:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:08.646 23:02:27 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:08.646 23:02:27 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:08.646 23:02:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.646 23:02:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.646 23:02:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.646 ************************************ 00:07:08.646 START TEST blockdev_nvme 00:07:08.646 ************************************ 00:07:08.646 23:02:27 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:08.905 * Looking for test storage... 00:07:08.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.905 23:02:28 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.905 --rc genhtml_branch_coverage=1 00:07:08.905 --rc genhtml_function_coverage=1 00:07:08.905 --rc genhtml_legend=1 00:07:08.905 --rc geninfo_all_blocks=1 00:07:08.905 --rc geninfo_unexecuted_blocks=1 00:07:08.905 00:07:08.905 ' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.905 --rc genhtml_branch_coverage=1 00:07:08.905 --rc genhtml_function_coverage=1 00:07:08.905 --rc genhtml_legend=1 00:07:08.905 --rc geninfo_all_blocks=1 00:07:08.905 --rc geninfo_unexecuted_blocks=1 00:07:08.905 00:07:08.905 ' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.905 --rc genhtml_branch_coverage=1 00:07:08.905 --rc genhtml_function_coverage=1 00:07:08.905 --rc genhtml_legend=1 00:07:08.905 --rc geninfo_all_blocks=1 00:07:08.905 --rc geninfo_unexecuted_blocks=1 00:07:08.905 00:07:08.905 ' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.905 --rc genhtml_branch_coverage=1 00:07:08.905 --rc genhtml_function_coverage=1 00:07:08.905 --rc genhtml_legend=1 00:07:08.905 --rc geninfo_all_blocks=1 00:07:08.905 --rc geninfo_unexecuted_blocks=1 00:07:08.905 00:07:08.905 ' 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:08.905 23:02:28 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72292 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 72292 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 72292 ']' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.905 23:02:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.905 23:02:28 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:08.905 [2024-11-18 23:02:28.195623] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
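The waitforlisten helper above blocks until the freshly launched spdk_tgt (pid 72292) answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that polling idea, run from the repo root, with an illustrative function name and retry count (the real autotest_common.sh implementation carries more error handling than this):

wait_for_rpc_socket() {
    # Poll until the SPDK target answers on its UNIX-domain RPC socket.
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100} i
    for ((i = 0; i < retries; i++)); do
        # rpc_get_methods exits non-zero until the app is up and listening
        if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

wait_for_rpc_socket /var/tmp/spdk.sock || echo 'spdk_tgt never came up' >&2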
00:07:08.905 [2024-11-18 23:02:28.195739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72292 ] 00:07:09.164 [2024-11-18 23:02:28.340841] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.164 [2024-11-18 23:02:28.384401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.747 23:02:29 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.747 23:02:29 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:09.747 23:02:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:09.747 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.747 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.011 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.011 23:02:29 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:10.011 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.011 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.272 23:02:29 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.272 23:02:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:10.272 23:02:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:10.273 23:02:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "14a0edaa-dc55-44b1-9383-25d6f64f514c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "14a0edaa-dc55-44b1-9383-25d6f64f514c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "fa5d904f-5ede-4d7f-bc5c-fa09c177e525"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fa5d904f-5ede-4d7f-bc5c-fa09c177e525",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "06c71121-9a15-4e1f-a169-c57678f5d512"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "06c71121-9a15-4e1f-a169-c57678f5d512",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "780155d8-1ab6-4e4a-b571-9f0fb3c808bc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "780155d8-1ab6-4e4a-b571-9f0fb3c808bc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "87a532d7-3327-4987-a001-e0d648d5515a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "87a532d7-3327-4987-a001-e0d648d5515a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "51a9379a-29e8-4bd6-a1da-d556f9385336"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "51a9379a-29e8-4bd6-a1da-d556f9385336",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:10.273 23:02:29 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:10.273 23:02:29 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:10.273 23:02:29 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:10.273 23:02:29 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 72292 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 72292 ']' 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 72292 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:10.273 23:02:29 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72292 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.273 killing process with pid 72292 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72292' 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 72292 00:07:10.273 23:02:29 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 72292 00:07:10.843 23:02:30 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.843 23:02:30 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.843 23:02:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:10.843 23:02:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.843 23:02:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.843 ************************************ 00:07:10.843 START TEST bdev_hello_world 00:07:10.843 ************************************ 00:07:10.843 23:02:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.843 [2024-11-18 23:02:30.181391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:10.843 [2024-11-18 23:02:30.181556] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:07:11.102 [2024-11-18 23:02:30.344206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.102 [2024-11-18 23:02:30.387128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.672 [2024-11-18 23:02:30.772631] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:11.672 [2024-11-18 23:02:30.772692] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:11.672 [2024-11-18 23:02:30.772719] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:11.672 [2024-11-18 23:02:30.775323] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:11.672 [2024-11-18 23:02:30.775986] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:11.672 [2024-11-18 23:02:30.776020] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:11.672 [2024-11-18 23:02:30.776284] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
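The write/read round trip above is the whole of the hello_bdev example: hello_start opens the bdev and an I/O channel, the write path puts "Hello World!" on Nvme0n1, and read_complete prints the string back before stopping the app. The same binary can be run standalone with just the generated bdev config and a target bdev name, exactly as run_test invoked it above (paths here are repo-relative versions of the absolute ones in the trace):

build/examples/hello_bdev \
    --json test/bdev/bdev.json \
    -b Nvme0n1
# Expected tail of the output, per the notices above:
#   Read string from bdev : Hello World!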
00:07:11.672 00:07:11.672 [2024-11-18 23:02:30.776314] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:11.672 00:07:11.672 real 0m0.905s 00:07:11.672 user 0m0.592s 00:07:11.672 sys 0m0.206s 00:07:11.672 23:02:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.672 23:02:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:11.672 ************************************ 00:07:11.672 END TEST bdev_hello_world 00:07:11.672 ************************************ 00:07:11.942 23:02:31 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:11.942 23:02:31 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:11.942 23:02:31 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.942 23:02:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.942 ************************************ 00:07:11.942 START TEST bdev_bounds 00:07:11.942 ************************************ 00:07:11.942 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:11.942 Process bdevio pid: 72396 00:07:11.942 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72396 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72396' 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72396 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 72396 ']' 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.943 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:11.943 [2024-11-18 23:02:31.148818] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:11.943 [2024-11-18 23:02:31.148980] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72396 ] 00:07:11.943 [2024-11-18 23:02:31.304680] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.204 [2024-11-18 23:02:31.353031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.204 [2024-11-18 23:02:31.353370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.204 [2024-11-18 23:02:31.353386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.771 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.771 23:02:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:12.771 23:02:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:12.771 I/O targets: 00:07:12.771 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:12.771 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:12.771 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.771 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.771 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.771 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:12.771 00:07:12.771 00:07:12.771 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.771 http://cunit.sourceforge.net/ 00:07:12.771 00:07:12.771 00:07:12.771 Suite: bdevio tests on: Nvme3n1 00:07:12.771 Test: blockdev write read block ...passed 00:07:12.771 Test: blockdev write zeroes read block ...passed 00:07:12.771 Test: blockdev write zeroes read no split ...passed 00:07:12.771 Test: blockdev write zeroes read split ...passed 00:07:12.771 Test: blockdev write zeroes read split partial ...passed 00:07:12.771 Test: blockdev reset ...[2024-11-18 23:02:32.113055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:12.771 [2024-11-18 23:02:32.115315] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:12.771 passed 00:07:12.771 Test: blockdev write read 8 blocks ...passed 00:07:12.771 Test: blockdev write read size > 128k ...passed 00:07:12.771 Test: blockdev write read invalid size ...passed 00:07:12.771 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.771 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.771 Test: blockdev write read max offset ...passed 00:07:12.771 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.771 Test: blockdev writev readv 8 blocks ...passed 00:07:12.771 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.771 Test: blockdev writev readv block ...passed 00:07:12.772 Test: blockdev writev readv size > 128k ...passed 00:07:12.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.772 Test: blockdev comparev and writev ...[2024-11-18 23:02:32.124778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a8806000 len:0x1000 00:07:12.772 [2024-11-18 23:02:32.124839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.772 passed 00:07:12.772 Test: blockdev nvme passthru rw ...passed 00:07:12.772 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.772 Test: blockdev nvme admin passthru ...[2024-11-18 23:02:32.125390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.772 [2024-11-18 23:02:32.125423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.772 passed 00:07:12.772 Test: blockdev copy ...passed 00:07:12.772 Suite: bdevio tests on: Nvme2n3 00:07:12.772 Test: blockdev write read block ...passed 00:07:12.772 Test: blockdev write zeroes read block ...passed 00:07:12.772 Test: blockdev write zeroes read no split ...passed 00:07:12.772 Test: blockdev write zeroes read split ...passed 00:07:13.032 Test: blockdev write zeroes read split partial ...passed 00:07:13.032 Test: blockdev reset ...[2024-11-18 23:02:32.147910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.032 [2024-11-18 23:02:32.149927] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
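The COMPARE FAILURE (02/85) completions logged in these suites are expected rather than errors: status code type 2h with status code 85h is the NVMe Compare Failure status, which the comparev-and-writev test deliberately provokes, so the tests still pass. A sketch of re-running a suite like this outside autotest, using the two programs traced above, assuming the repo root as working directory and an illustrative settle delay:

# -w makes bdevio start up and wait for an RPC trigger; the '' positional
# argument means "all bdevs in the config". Both paths appear in the trace.
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
bdevio_pid=$!
sleep 1   # illustrative: let the app claim the bdevs before triggering
test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid"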
00:07:13.032 passed 00:07:13.032 Test: blockdev write read 8 blocks ...passed 00:07:13.032 Test: blockdev write read size > 128k ...passed 00:07:13.032 Test: blockdev write read invalid size ...passed 00:07:13.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.032 Test: blockdev write read max offset ...passed 00:07:13.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.032 Test: blockdev writev readv 8 blocks ...passed 00:07:13.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.032 Test: blockdev writev readv block ...passed 00:07:13.032 Test: blockdev writev readv size > 128k ...passed 00:07:13.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.032 Test: blockdev comparev and writev ...[2024-11-18 23:02:32.158048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d8c05000 len:0x1000 00:07:13.032 [2024-11-18 23:02:32.158108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.032 passed 00:07:13.032 Test: blockdev nvme passthru rw ...passed 00:07:13.032 Test: blockdev nvme passthru vendor specific ...passed 00:07:13.032 Test: blockdev nvme admin passthru ...[2024-11-18 23:02:32.158987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.032 [2024-11-18 23:02:32.159029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.032 passed 00:07:13.032 Test: blockdev copy ...passed 00:07:13.032 Suite: bdevio tests on: Nvme2n2 00:07:13.032 Test: blockdev write read block ...passed 00:07:13.032 Test: blockdev write zeroes read block ...passed 00:07:13.032 Test: blockdev write zeroes read no split ...passed 00:07:13.032 Test: blockdev write zeroes read split ...passed 00:07:13.032 Test: blockdev write zeroes read split partial ...passed 00:07:13.032 Test: blockdev reset ...[2024-11-18 23:02:32.181559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.032 passed 00:07:13.032 Test: blockdev write read 8 blocks ...[2024-11-18 23:02:32.184058] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.032 passed 00:07:13.032 Test: blockdev write read size > 128k ...passed 00:07:13.032 Test: blockdev write read invalid size ...passed 00:07:13.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.032 Test: blockdev write read max offset ...passed 00:07:13.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.032 Test: blockdev writev readv 8 blocks ...passed 00:07:13.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.033 Test: blockdev writev readv block ...passed 00:07:13.033 Test: blockdev writev readv size > 128k ...passed 00:07:13.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.033 Test: blockdev comparev and writev ...[2024-11-18 23:02:32.193079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9036000 len:0x1000 00:07:13.033 [2024-11-18 23:02:32.193122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme passthru rw ...passed 00:07:13.033 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:02:32.195189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.033 [2024-11-18 23:02:32.195218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme admin passthru ...passed 00:07:13.033 Test: blockdev copy ...passed 00:07:13.033 Suite: bdevio tests on: Nvme2n1 00:07:13.033 Test: blockdev write read block ...passed 00:07:13.033 Test: blockdev write zeroes read block ...passed 00:07:13.033 Test: blockdev write zeroes read no split ...passed 00:07:13.033 Test: blockdev write zeroes read split ...passed 00:07:13.033 Test: blockdev write zeroes read split partial ...passed 00:07:13.033 Test: blockdev reset ...[2024-11-18 23:02:32.218453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:13.033 [2024-11-18 23:02:32.220389] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.033 passed 00:07:13.033 Test: blockdev write read 8 blocks ...passed 00:07:13.033 Test: blockdev write read size > 128k ...passed 00:07:13.033 Test: blockdev write read invalid size ...passed 00:07:13.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.033 Test: blockdev write read max offset ...passed 00:07:13.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.033 Test: blockdev writev readv 8 blocks ...passed 00:07:13.033 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.033 Test: blockdev writev readv block ...passed 00:07:13.033 Test: blockdev writev readv size > 128k ...passed 00:07:13.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.033 Test: blockdev comparev and writev ...[2024-11-18 23:02:32.226834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9030000 len:0x1000 00:07:13.033 [2024-11-18 23:02:32.226885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme passthru rw ...passed 00:07:13.033 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:02:32.228009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.033 [2024-11-18 23:02:32.228037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme admin passthru ...passed 00:07:13.033 Test: blockdev copy ...passed 00:07:13.033 Suite: bdevio tests on: Nvme1n1 00:07:13.033 Test: blockdev write read block ...passed 00:07:13.033 Test: blockdev write zeroes read block ...passed 00:07:13.033 Test: blockdev write zeroes read no split ...passed 00:07:13.033 Test: blockdev write zeroes read split ...passed 00:07:13.033 Test: blockdev write zeroes read split partial ...passed 00:07:13.033 Test: blockdev reset ...[2024-11-18 23:02:32.241266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:13.033 [2024-11-18 23:02:32.242854] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.033 passed 00:07:13.033 Test: blockdev write read 8 blocks ...passed 00:07:13.033 Test: blockdev write read size > 128k ...passed 00:07:13.033 Test: blockdev write read invalid size ...passed 00:07:13.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.033 Test: blockdev write read max offset ...passed 00:07:13.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.033 Test: blockdev writev readv 8 blocks ...passed 00:07:13.033 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.033 Test: blockdev writev readv block ...passed 00:07:13.033 Test: blockdev writev readv size > 128k ...passed 00:07:13.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.033 Test: blockdev comparev and writev ...[2024-11-18 23:02:32.250272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d902c000 len:0x1000 00:07:13.033 [2024-11-18 23:02:32.250310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme passthru rw ...passed 00:07:13.033 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:02:32.251833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:13.033 [2024-11-18 23:02:32.251863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme admin passthru ...passed 00:07:13.033 Test: blockdev copy ...passed 00:07:13.033 Suite: bdevio tests on: Nvme0n1 00:07:13.033 Test: blockdev write read block ...passed 00:07:13.033 Test: blockdev write zeroes read block ...passed 00:07:13.033 Test: blockdev write zeroes read no split ...passed 00:07:13.033 Test: blockdev write zeroes read split ...passed 00:07:13.033 Test: blockdev write zeroes read split partial ...passed 00:07:13.033 Test: blockdev reset ...[2024-11-18 23:02:32.279427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:13.033 [2024-11-18 23:02:32.281081] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:13.033 passed 00:07:13.033 Test: blockdev write read 8 blocks ...passed 00:07:13.033 Test: blockdev write read size > 128k ...passed 00:07:13.033 Test: blockdev write read invalid size ...passed 00:07:13.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:13.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:13.033 Test: blockdev write read max offset ...passed 00:07:13.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:13.033 Test: blockdev writev readv 8 blocks ...passed 00:07:13.033 Test: blockdev writev readv 30 x 1block ...passed 00:07:13.033 Test: blockdev writev readv block ...passed 00:07:13.033 Test: blockdev writev readv size > 128k ...passed 00:07:13.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:13.033 Test: blockdev comparev and writev ...passed 00:07:13.033 Test: blockdev nvme passthru rw ...[2024-11-18 23:02:32.289531] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:13.033 separate metadata which is not supported yet. 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:02:32.290278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:13.033 [2024-11-18 23:02:32.290313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:13.033 passed 00:07:13.033 Test: blockdev nvme admin passthru ...passed 00:07:13.033 Test: blockdev copy ...passed 00:07:13.033 00:07:13.033 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.033 suites 6 6 n/a 0 0 00:07:13.033 tests 138 138 138 0 0 00:07:13.033 asserts 893 893 893 0 n/a 00:07:13.033 00:07:13.033 Elapsed time = 0.472 seconds 00:07:13.033 0 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72396 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 72396 ']' 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 72396 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72396 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.033 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72396' 00:07:13.033 killing process with pid 72396 00:07:13.034 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 72396 00:07:13.034 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 72396 00:07:13.295 23:02:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:13.295 00:07:13.295 real 0m1.452s 00:07:13.295 user 0m3.520s 00:07:13.295 sys 0m0.308s 00:07:13.295 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.295 23:02:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.295 ************************************ 00:07:13.295 END 
TEST bdev_bounds 00:07:13.295 ************************************ 00:07:13.295 23:02:32 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.295 23:02:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:13.295 23:02:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.295 23:02:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.295 ************************************ 00:07:13.295 START TEST bdev_nbd 00:07:13.295 ************************************ 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72445 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72445 /var/tmp/spdk-nbd.sock 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 72445 ']' 00:07:13.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
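nbd_function_test exports each of the six bdevs through the kernel NBD driver and exercises them as ordinary block devices. A sketch of the underlying RPC pair, assuming the nbd kernel module is available; nbd_start_disk takes the bdev name plus an optional /dev/nbdX (omitted in the trace below, in which case SPDK picks a free device and prints it):

sudo modprobe nbd
# Export the bdev, pinning it to a specific device node for the example:
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# ... /dev/nbd0 now behaves like any block device ...
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0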
00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.295 23:02:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.295 [2024-11-18 23:02:32.667519] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:13.295 [2024-11-18 23:02:32.667668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.557 [2024-11-18 23:02:32.818940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.557 [2024-11-18 23:02:32.891669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.501 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.501 1+0 records in 00:07:14.501 1+0 records out 00:07:14.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108705 s, 3.8 MB/s 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.502 23:02:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.763 1+0 records in 00:07:14.763 1+0 records out 00:07:14.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000904538 s, 4.5 MB/s 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
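Each waitfornbd call traced here follows the same pattern: poll /proc/partitions until the kernel registers the device, then prove it serves I/O by reading one 4 KiB block with O_DIRECT and checking the byte count. Consolidated into a sketch (the function name, sleep, and /tmp scratch path are illustrative; the grep/dd/stat steps mirror the trace):

waitfornbd_sketch() {
    local nbd_name=$1 i
    # Wait for the kernel to publish the device (up to 20 tries, as above)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # One direct-I/O read of a 4096-byte block must round-trip intact
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]
}

waitfornbd_sketch nbd0 && echo '/dev/nbd0 is serving I/O'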
00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.763 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.030 1+0 records in 00:07:15.030 1+0 records out 00:07:15.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643605 s, 6.4 MB/s 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.030 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.293 23:02:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.293 1+0 records in 00:07:15.293 1+0 records out 00:07:15.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130545 s, 3.1 MB/s 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.293 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.624 1+0 records in 00:07:15.624 1+0 records out 00:07:15.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089906 s, 4.6 MB/s 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.624 23:02:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.624 23:02:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.900 1+0 records in 00:07:15.900 1+0 records out 00:07:15.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106946 s, 3.8 MB/s 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd0", 00:07:15.900 "bdev_name": "Nvme0n1" 00:07:15.900 }, 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd1", 00:07:15.900 "bdev_name": "Nvme1n1" 00:07:15.900 }, 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd2", 00:07:15.900 "bdev_name": "Nvme2n1" 00:07:15.900 }, 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd3", 00:07:15.900 "bdev_name": "Nvme2n2" 00:07:15.900 }, 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd4", 00:07:15.900 "bdev_name": "Nvme2n3" 00:07:15.900 }, 00:07:15.900 { 00:07:15.900 "nbd_device": "/dev/nbd5", 00:07:15.900 "bdev_name": "Nvme3n1" 00:07:15.900 } 00:07:15.900 ]' 00:07:15.900 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq 
-r '.[] | .nbd_device')) 00:07:16.160 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd0", 00:07:16.161 "bdev_name": "Nvme0n1" 00:07:16.161 }, 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd1", 00:07:16.161 "bdev_name": "Nvme1n1" 00:07:16.161 }, 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd2", 00:07:16.161 "bdev_name": "Nvme2n1" 00:07:16.161 }, 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd3", 00:07:16.161 "bdev_name": "Nvme2n2" 00:07:16.161 }, 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd4", 00:07:16.161 "bdev_name": "Nvme2n3" 00:07:16.161 }, 00:07:16.161 { 00:07:16.161 "nbd_device": "/dev/nbd5", 00:07:16.161 "bdev_name": "Nvme3n1" 00:07:16.161 } 00:07:16.161 ]' 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.161 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.422 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.681 23:02:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.681 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.942 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:17.203 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.204 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:17.465 23:02:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.465 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.466 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.466 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.466 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.727 
23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.727 23:02:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:17.987 /dev/nbd0 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.987 1+0 records in 00:07:17.987 1+0 records out 00:07:17.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629521 s, 6.5 MB/s 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.987 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:18.248 /dev/nbd1 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.248 23:02:37 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.248 1+0 records in 00:07:18.248 1+0 records out 00:07:18.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000929696 s, 4.4 MB/s 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.248 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.249 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:18.249 /dev/nbd10 00:07:18.511 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.512 1+0 records in 00:07:18.512 1+0 records out 00:07:18.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00089167 s, 4.6 MB/s 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:18.512 /dev/nbd11 
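Each nbd_start_disk call in this stretch is a JSON-RPC against the SPDK app listening on /var/tmp/spdk-nbd.sock, exporting one bdev as a kernel /dev/nbdN node. Reduced to its essentials, the driving pattern is roughly the sketch below; socket path, RPC names, and bdev names are exactly as they appear in the trace, while the loop is an illustrative condensation of the first attach round (the rerun above maps the later bdevs to /dev/nbd10 through /dev/nbd13 instead):

  sock=/var/tmp/spdk-nbd.sock
  i=0
  for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    # rpc.py prints the /dev/nbdN node it attached the bdev to.
    scripts/rpc.py -s "$sock" nbd_start_disk "$bdev" "/dev/nbd$i"
    i=$((i + 1))
  done
  # Dump the resulting device-to-bdev mapping as JSON (parsed with jq in the trace).
  scripts/rpc.py -s "$sock" nbd_get_disks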
00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.512 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.775 1+0 records in 00:07:18.775 1+0 records out 00:07:18.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780419 s, 5.2 MB/s 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.775 23:02:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:18.775 /dev/nbd12 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.775 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.776 1+0 records in 00:07:18.776 1+0 records out 00:07:18.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102568 s, 4.0 MB/s 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.776 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:19.036 /dev/nbd13 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.036 1+0 records in 00:07:19.036 1+0 records out 00:07:19.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106601 s, 3.8 MB/s 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.036 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd0", 00:07:19.299 "bdev_name": "Nvme0n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd1", 
00:07:19.299 "bdev_name": "Nvme1n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd10", 00:07:19.299 "bdev_name": "Nvme2n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd11", 00:07:19.299 "bdev_name": "Nvme2n2" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd12", 00:07:19.299 "bdev_name": "Nvme2n3" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd13", 00:07:19.299 "bdev_name": "Nvme3n1" 00:07:19.299 } 00:07:19.299 ]' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd0", 00:07:19.299 "bdev_name": "Nvme0n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd1", 00:07:19.299 "bdev_name": "Nvme1n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd10", 00:07:19.299 "bdev_name": "Nvme2n1" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd11", 00:07:19.299 "bdev_name": "Nvme2n2" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd12", 00:07:19.299 "bdev_name": "Nvme2n3" 00:07:19.299 }, 00:07:19.299 { 00:07:19.299 "nbd_device": "/dev/nbd13", 00:07:19.299 "bdev_name": "Nvme3n1" 00:07:19.299 } 00:07:19.299 ]' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.299 /dev/nbd1 00:07:19.299 /dev/nbd10 00:07:19.299 /dev/nbd11 00:07:19.299 /dev/nbd12 00:07:19.299 /dev/nbd13' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.299 /dev/nbd1 00:07:19.299 /dev/nbd10 00:07:19.299 /dev/nbd11 00:07:19.299 /dev/nbd12 00:07:19.299 /dev/nbd13' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:19.299 256+0 records in 00:07:19.299 256+0 records out 00:07:19.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683312 s, 153 MB/s 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.299 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.562 
256+0 records in 00:07:19.562 256+0 records out 00:07:19.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251532 s, 4.2 MB/s 00:07:19.562 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.562 23:02:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.824 256+0 records in 00:07:19.824 256+0 records out 00:07:19.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245167 s, 4.3 MB/s 00:07:19.824 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.824 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:20.092 256+0 records in 00:07:20.092 256+0 records out 00:07:20.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229849 s, 4.6 MB/s 00:07:20.092 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.092 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:20.357 256+0 records in 00:07:20.357 256+0 records out 00:07:20.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215065 s, 4.9 MB/s 00:07:20.357 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.357 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:20.618 256+0 records in 00:07:20.618 256+0 records out 00:07:20.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209111 s, 5.0 MB/s 00:07:20.618 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.618 23:02:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:20.888 256+0 records in 00:07:20.888 256+0 records out 00:07:20.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210471 s, 5.0 MB/s 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.888 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:20.889 
23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.889 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.890 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.890 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.890 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.154 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.155 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 
-- # grep -q -w nbd1 /proc/partitions 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.416 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.679 23:02:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.941 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.200 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:22.458 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:22.716 malloc_lvol_verify 00:07:22.716 23:02:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:22.977 4efa4f12-49df-4050-9e68-7f0be20706fa 00:07:22.977 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:22.977 c0464b4d-b3d1-4d28-9227-9ea57f6ec6c9 00:07:22.977 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:23.236 /dev/nbd0 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 
-- # local nbd=nbd0 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:23.236 mke2fs 1.47.0 (5-Feb-2023) 00:07:23.236 Discarding device blocks: 0/4096 done 00:07:23.236 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:23.236 00:07:23.236 Allocating group tables: 0/1 done 00:07:23.236 Writing inode tables: 0/1 done 00:07:23.236 Creating journal (1024 blocks): done 00:07:23.236 Writing superblocks and filesystem accounting information: 0/1 done 00:07:23.236 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.236 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72445 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 72445 ']' 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 72445 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72445 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.494 killing process with pid 72445 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72445' 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 72445 00:07:23.494 23:02:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 72445 00:07:23.752 23:02:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:23.752 00:07:23.752 real 0m10.409s 00:07:23.752 
user 0m14.375s 00:07:23.752 sys 0m3.601s 00:07:23.752 ************************************ 00:07:23.752 END TEST bdev_nbd 00:07:23.752 ************************************ 00:07:23.752 23:02:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.752 23:02:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 23:02:43 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:23.752 skipping fio tests on NVMe due to multi-ns failures. 00:07:23.752 23:02:43 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:23.752 23:02:43 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:23.752 23:02:43 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:23.752 23:02:43 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.752 23:02:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:23.752 23:02:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.752 23:02:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:23.752 ************************************ 00:07:23.752 START TEST bdev_verify 00:07:23.752 ************************************ 00:07:23.752 23:02:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:23.752 [2024-11-18 23:02:43.114391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:23.752 [2024-11-18 23:02:43.114505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72827 ] 00:07:24.011 [2024-11-18 23:02:43.263503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.011 [2024-11-18 23:02:43.312033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.011 [2024-11-18 23:02:43.312123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.616 Running I/O for 5 seconds... 
00:07:26.921 18624.00 IOPS, 72.75 MiB/s [2024-11-18T23:02:47.232Z] 19072.00 IOPS, 74.50 MiB/s [2024-11-18T23:02:48.166Z] 19498.67 IOPS, 76.17 MiB/s [2024-11-18T23:02:49.100Z] 19744.00 IOPS, 77.12 MiB/s [2024-11-18T23:02:49.100Z] 19929.60 IOPS, 77.85 MiB/s
00:07:29.722 Latency(us)
00:07:29.722 [2024-11-18T23:02:49.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:29.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0xbd0bd
00:07:29.722 Nvme0n1 : 5.04 1650.05 6.45 0.00 0.00 77240.64 17442.66 96388.33
00:07:29.722 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:29.722 Nvme0n1 : 5.06 1619.27 6.33 0.00 0.00 78652.92 16434.41 83886.08
00:07:29.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0xa0000
00:07:29.722 Nvme1n1 : 5.07 1652.58 6.46 0.00 0.00 76940.26 8015.56 87515.77
00:07:29.722 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0xa0000 length 0xa0000
00:07:29.722 Nvme1n1 : 5.06 1618.83 6.32 0.00 0.00 78528.88 16434.41 76626.71
00:07:29.722 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0x80000
00:07:29.722 Nvme2n1 : 5.09 1658.43 6.48 0.00 0.00 76592.17 12502.25 73400.32
00:07:29.722 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x80000 length 0x80000
00:07:29.722 Nvme2n1 : 5.08 1625.47 6.35 0.00 0.00 78053.98 5167.26 62107.96
00:07:29.722 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0x80000
00:07:29.722 Nvme2n2 : 5.10 1656.67 6.47 0.00 0.00 76443.82 17039.36 71787.13
00:07:29.722 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x80000 length 0x80000
00:07:29.722 Nvme2n2 : 5.08 1625.01 6.35 0.00 0.00 77932.87 5494.94 59284.87
00:07:29.722 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0x80000
00:07:29.722 Nvme2n3 : 5.10 1656.24 6.47 0.00 0.00 76315.27 16736.89 73803.62
00:07:29.722 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x80000 length 0x80000
00:07:29.722 Nvme2n3 : 5.09 1633.54 6.38 0.00 0.00 77508.38 8065.97 62914.56
00:07:29.722 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x0 length 0x20000
00:07:29.722 Nvme3n1 : 5.10 1655.80 6.47 0.00 0.00 76198.63 16232.76 75820.11
00:07:29.722 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:29.722 Verification LBA range: start 0x20000 length 0x20000
00:07:29.722 Nvme3n1 : 5.09 1633.03 6.38 0.00 0.00 77377.28 8418.86 66140.95
00:07:29.722 [2024-11-18T23:02:49.100Z] ===================================================================================================================
00:07:29.722 [2024-11-18T23:02:49.100Z] Total : 19684.92 76.89 0.00 0.00 77306.30 5167.26 96388.33
00:07:30.287
00:07:30.288 real 0m6.513s
00:07:30.288 user 0m12.202s
00:07:30.288 sys 0m0.263s
00:07:30.288 23:02:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:30.288 23:02:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:30.288 ************************************
00:07:30.288 END TEST bdev_verify
00:07:30.288 ************************************
00:07:30.288 23:02:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:30.288 23:02:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:30.288 23:02:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:30.288 23:02:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:30.288 ************************************
00:07:30.288 START TEST bdev_verify_big_io
00:07:30.288 ************************************
00:07:30.288 23:02:49 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:30.545 [2024-11-18 23:02:49.675140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:30.545 [2024-11-18 23:02:49.675284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72919 ]
00:07:30.545 [2024-11-18 23:02:49.817087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:30.545 [2024-11-18 23:02:49.861440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:30.545 [2024-11-18 23:02:49.861540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.111 Running I/O for 5 seconds...
00:07:36.167 535.00 IOPS, 33.44 MiB/s [2024-11-18T23:02:56.480Z] 1812.00 IOPS, 113.25 MiB/s [2024-11-18T23:02:56.480Z] 2542.33 IOPS, 158.90 MiB/s
00:07:37.102 Latency(us)
00:07:37.102 [2024-11-18T23:02:56.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:37.102 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.102 Verification LBA range: start 0x0 length 0xbd0b
00:07:37.102 Nvme0n1 : 5.58 114.76 7.17 0.00 0.00 1074548.74 35288.62 1167952.34
00:07:37.102 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.102 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:37.102 Nvme0n1 : 5.61 118.43 7.40 0.00 0.00 1032692.93 26214.40 1155046.79
00:07:37.102 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.102 Verification LBA range: start 0x0 length 0xa000
00:07:37.102 Nvme1n1 : 5.69 116.44 7.28 0.00 0.00 1018428.98 101227.91 1032444.06
00:07:37.102 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.102 Verification LBA range: start 0xa000 length 0xa000
00:07:37.102 Nvme1n1 : 5.70 123.60 7.72 0.00 0.00 963279.52 84692.68 961463.53
00:07:37.102 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.102 Verification LBA range: start 0x0 length 0x8000
00:07:37.102 Nvme2n1 : 5.74 122.62 7.66 0.00 0.00 944860.12 43152.94 1051802.39
00:07:37.103 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x8000 length 0x8000
00:07:37.103 Nvme2n1 : 5.81 128.67 8.04 0.00 0.00 895873.62 53235.40 877577.45
00:07:37.103 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x0 length 0x8000
00:07:37.103 Nvme2n2 : 5.85 126.83 7.93 0.00 0.00 876743.81 35086.97 1167952.34
00:07:37.103 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x8000 length 0x8000
00:07:37.103 Nvme2n2 : 5.82 132.06 8.25 0.00 0.00 846441.29 59688.17 896935.78
00:07:37.103 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x0 length 0x8000
00:07:37.103 Nvme2n3 : 5.90 140.93 8.81 0.00 0.00 771567.76 16938.54 1096971.82
00:07:37.103 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x8000 length 0x8000
00:07:37.103 Nvme2n3 : 5.96 146.03 9.13 0.00 0.00 738059.45 33877.07 935652.43
00:07:37.103 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x0 length 0x2000
00:07:37.103 Nvme3n1 : 6.03 166.14 10.38 0.00 0.00 635080.12 730.98 1122782.92
00:07:37.103 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:37.103 Verification LBA range: start 0x2000 length 0x2000
00:07:37.103 Nvme3n1 : 6.03 161.26 10.08 0.00 0.00 649954.79 680.57 2039077.02
00:07:37.103 [2024-11-18T23:02:56.481Z] ===================================================================================================================
00:07:37.103 [2024-11-18T23:02:56.481Z] Total : 1597.77 99.86 0.00 0.00 850224.79 680.57 2039077.02
00:07:38.043
00:07:38.043 real 0m7.709s
00:07:38.043 user 0m14.626s
00:07:38.043 sys 0m0.252s
00:07:38.043 23:02:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:38.043 ************************************
00:07:38.043 END TEST bdev_verify_big_io ************************************
00:07:38.043 23:02:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:38.043 23:02:57 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:38.043 23:02:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:38.043 23:02:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:38.043 23:02:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.043 ************************************
00:07:38.043 START TEST bdev_write_zeroes
00:07:38.043 ************************************
00:07:38.043 23:02:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:38.300 [2024-11-18 23:02:57.427801] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:38.300 [2024-11-18 23:02:57.427902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73025 ]
00:07:38.300 [2024-11-18 23:02:57.572094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.300 [2024-11-18 23:02:57.615048] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.865 Running I/O for 1 seconds...
00:07:39.797 72576.00 IOPS, 283.50 MiB/s
00:07:39.797 Latency(us)
00:07:39.797 [2024-11-18T23:02:59.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:39.797 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme0n1 : 1.02 12065.71 47.13 0.00 0.00 10583.54 7612.26 20971.52
00:07:39.797 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme1n1 : 1.02 12050.30 47.07 0.00 0.00 10581.50 7914.73 20568.22
00:07:39.797 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme2n1 : 1.02 12036.54 47.02 0.00 0.00 10559.35 7864.32 19963.27
00:07:39.797 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme2n2 : 1.02 12022.81 46.96 0.00 0.00 10533.09 7612.26 19459.15
00:07:39.797 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme2n3 : 1.02 12009.14 46.91 0.00 0.00 10505.31 7813.91 19358.33
00:07:39.797 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:39.797 Nvme3n1 : 1.02 12053.33 47.08 0.00 0.00 10462.07 5620.97 21072.34
00:07:39.797 [2024-11-18T23:02:59.175Z] ===================================================================================================================
00:07:39.797 [2024-11-18T23:02:59.175Z] Total : 72237.83 282.18 0.00 0.00 10537.41 5620.97 21072.34
00:07:40.055
00:07:40.055 real 0m1.907s
00:07:40.055 user 0m1.589s
00:07:40.055 sys 0m0.209s
00:07:40.055 23:02:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.055 ************************************
00:07:40.055 END TEST bdev_write_zeroes ************************************
00:07:40.055 23:02:59 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:40.055 23:02:59 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.055 23:02:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:40.055 23:02:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.055 23:02:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.055 ************************************
00:07:40.055 START TEST bdev_json_nonenclosed
00:07:40.055 ************************************
00:07:40.055 23:02:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.313 [2024-11-18 23:02:59.369579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-11-18 23:02:59.369701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73068 ]
00:07:40.313 [2024-11-18 23:02:59.519905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.313 [2024-11-18 23:02:59.562878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.313 [2024-11-18 23:02:59.562978] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:07:40.313 [2024-11-18 23:02:59.563002] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:07:40.313 [2024-11-18 23:02:59.563013] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:40.313
00:07:40.313 real 0m0.350s
00:07:40.313 user 0m0.134s
00:07:40.313 sys 0m0.113s
00:07:40.313 23:02:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.313 23:02:59 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:07:40.313 ************************************
00:07:40.313 END TEST bdev_json_nonenclosed
00:07:40.313 ************************************
00:07:40.638 23:02:59 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.638 23:02:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:40.638 23:02:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.638 23:02:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.638 ************************************
00:07:40.638 START TEST bdev_json_nonarray
00:07:40.638 ************************************
00:07:40.638 23:02:59 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:40.638 [2024-11-18 23:02:59.769020] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
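(Context for the pair of negative tests here: nonenclosed.json feeds the app a configuration whose subsystem list is not wrapped in a top-level {} object, and nonarray.json supplies 'subsystems' as something other than an array; in both cases spdk_app_start must fail cleanly with the *ERROR* lines seen in this output rather than crash. An illustrative sketch of the difference, where the file paths and contents below are assumptions for illustration, not copied from the repo:

    # well-formed: subsystem list enclosed in a top-level object
    cat > /tmp/enclosed.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # malformed in the way bdev_json_nonenclosed exercises: no enclosing {}
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
)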
00:07:40.638 [2024-11-18 23:02:59.769170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73088 ]
00:07:40.638 [2024-11-18 23:02:59.916668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.638 [2024-11-18 23:02:59.959693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.638 [2024-11-18 23:02:59.959797] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:07:40.638 [2024-11-18 23:02:59.959814] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:07:40.638 [2024-11-18 23:02:59.959826] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:40.917
00:07:40.917 real 0m0.349s
00:07:40.917 user 0m0.134s
00:07:40.917 sys 0m0.111s
00:07:40.917 23:03:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.917 23:03:00 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:07:40.917 ************************************
00:07:40.917 END TEST bdev_json_nonarray
00:07:40.917 ************************************
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:07:40.917 23:03:00 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:07:40.917
00:07:40.917 real 0m32.130s
00:07:40.917 user 0m49.364s
00:07:40.917 sys 0m5.874s
00:07:40.917 23:03:00 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:40.917 ************************************
00:07:40.917 END TEST blockdev_nvme
00:07:40.917 ************************************
00:07:40.917 23:03:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:40.917 23:03:00 -- spdk/autotest.sh@209 -- # uname -s
00:07:40.917 23:03:00 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:07:40.917 23:03:00 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:07:40.917 23:03:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:40.917 23:03:00 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:40.917 23:03:00 -- common/autotest_common.sh@10 -- # set +x
00:07:40.918 ************************************
00:07:40.918 START TEST blockdev_nvme_gpt
00:07:40.918 ************************************
00:07:40.918 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:07:40.918 * Looking for test storage... * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:07:40.918 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:07:40.918 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version
00:07:40.918 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:07:40.918 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:07:40.918 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:41.177 23:03:00 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.177 --rc genhtml_branch_coverage=1
00:07:41.177 --rc genhtml_function_coverage=1
00:07:41.177 --rc genhtml_legend=1
00:07:41.177 --rc geninfo_all_blocks=1
00:07:41.177 --rc geninfo_unexecuted_blocks=1
00:07:41.177
00:07:41.177 '
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.177 --rc genhtml_branch_coverage=1
00:07:41.177 --rc genhtml_function_coverage=1
00:07:41.177 --rc genhtml_legend=1
00:07:41.177 --rc geninfo_all_blocks=1
00:07:41.177 --rc geninfo_unexecuted_blocks=1
00:07:41.177
00:07:41.177 '
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.177 --rc genhtml_branch_coverage=1
00:07:41.177 --rc genhtml_function_coverage=1
00:07:41.177 --rc genhtml_legend=1
00:07:41.177 --rc geninfo_all_blocks=1
00:07:41.177 --rc geninfo_unexecuted_blocks=1
00:07:41.177
00:07:41.177 '
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:07:41.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:41.177 --rc genhtml_branch_coverage=1
00:07:41.177 --rc genhtml_function_coverage=1
00:07:41.177 --rc genhtml_legend=1
00:07:41.177 --rc geninfo_all_blocks=1
00:07:41.177 --rc geninfo_unexecuted_blocks=1
00:07:41.177
00:07:41.177 '
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73172
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 73172
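(The waitforlisten helper invoked here simply blocks until the freshly started spdk_tgt answers on its RPC socket. A rough standalone equivalent, sketched under the assumption of the default /var/tmp/spdk.sock socket; this is not the helper's actual implementation:

    build/bin/spdk_tgt &
    # poll the RPC server until it responds; rpc_get_methods is a cheap query
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
)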
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 73172 ']'
00:07:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:41.177 23:03:00 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:41.177 23:03:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:41.177 [2024-11-18 23:03:00.383502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:41.177 [2024-11-18 23:03:00.383628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73172 ]
00:07:41.177 [2024-11-18 23:03:00.532558] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.436 [2024-11-18 23:03:00.575730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:42.009 23:03:01 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:42.009 23:03:01 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0
00:07:42.009 23:03:01 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:07:42.009 23:03:01 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf
00:07:42.009 23:03:01 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:07:42.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:42.525 Waiting for block devices as requested
00:07:42.525 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:07:42.525 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:07:42.525 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:07:42.525 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:07:47.783 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1')
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:07:47.783 BYT;
00:07:47.783 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:07:47.783 BYT;
00:07:47.783 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:07:47.783 23:03:06 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:07:47.783 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:07:47.783 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:07:47.783 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:07:47.783 23:03:07 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:07:47.783 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:07:47.783 23:03:07 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:07:48.716 The operation has completed successfully.
00:07:48.716 23:03:08 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:07:50.087 The operation has completed successfully.
00:07:50.087 23:03:09 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:50.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:07:50.909 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:07:50.909 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:07:50.909 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:07:50.909 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:07:50.909 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.909 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:50.909 []
00:07:50.909 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:50.909 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:07:50.909 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.909 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:51.167 23:03:10
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.167 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.167 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.167 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:51.167 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:51.167 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.167 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b0fd7a77-ae07-4e92-8820-030a162446e1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b0fd7a77-ae07-4e92-8820-030a162446e1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d5a1265b-aae1-4ad1-b7f2-4175525b0efb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d5a1265b-aae1-4ad1-b7f2-4175525b0efb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4ce76f65-05ad-4671-88dc-9163cc201eca"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4ce76f65-05ad-4671-88dc-9163cc201eca",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a2f45b3f-438c-41d8-9df5-642d22099dd8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a2f45b3f-438c-41d8-9df5-642d22099dd8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d8745836-0c1f-411c-9715-0074cda2e474"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8745836-0c1f-411c-9715-0074cda2e474",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:51.427 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 73172 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 73172 ']' 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 73172 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73172 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.427 killing process with pid 73172 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73172' 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 73172 00:07:51.427 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 73172 00:07:51.725 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:51.725 23:03:10 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.725 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:51.725 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.725 23:03:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.725 ************************************ 00:07:51.725 START TEST bdev_hello_world 00:07:51.725 ************************************ 00:07:51.725 23:03:10 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.725 
[2024-11-18 23:03:11.011414] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-11-18 23:03:11.011533] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73776 ]
00:07:51.982 [2024-11-18 23:03:11.158193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.982 [2024-11-18 23:03:11.197650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.241 [2024-11-18 23:03:11.577124] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:07:52.241 [2024-11-18 23:03:11.577203] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:07:52.241 [2024-11-18 23:03:11.577229] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:07:52.241 [2024-11-18 23:03:11.579450] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:07:52.241 [2024-11-18 23:03:11.580035] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:07:52.241 [2024-11-18 23:03:11.580058] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:07:52.241 [2024-11-18 23:03:11.580828] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:07:52.241
00:07:52.241 [2024-11-18 23:03:11.580880] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:07:52.499
00:07:52.499 real 0m0.824s
00:07:52.499 user 0m0.540s
00:07:52.499 sys 0m0.180s
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:52.499 ************************************
00:07:52.499 END TEST bdev_hello_world
00:07:52.499 ************************************
00:07:52.499 23:03:11 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:07:52.499 23:03:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:52.499 23:03:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:52.499 23:03:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:52.499 ************************************
00:07:52.499 START TEST bdev_bounds
00:07:52.499 ************************************
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds ''
00:07:52.499 Process bdevio pid: 73807 23:03:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73807
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73807'
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73807
00:07:52.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73807 ']' 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:52.499 23:03:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:52.756 [2024-11-18 23:03:11.886538] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:52.756 [2024-11-18 23:03:11.886646] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73807 ] 00:07:52.756 [2024-11-18 23:03:12.029402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:52.756 [2024-11-18 23:03:12.073361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.756 [2024-11-18 23:03:12.073572] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.757 [2024-11-18 23:03:12.073690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.689 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.689 23:03:12 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:53.690 23:03:12 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:53.690 I/O targets: 00:07:53.690 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:53.690 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:53.690 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:53.690 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.690 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.690 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:53.690 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:53.690 00:07:53.690 00:07:53.690 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.690 http://cunit.sourceforge.net/ 00:07:53.690 00:07:53.690 00:07:53.690 Suite: bdevio tests on: Nvme3n1 00:07:53.690 Test: blockdev write read block ...passed 00:07:53.690 Test: blockdev write zeroes read block ...passed 00:07:53.690 Test: blockdev write zeroes read no split ...passed 00:07:53.690 Test: blockdev write zeroes read split ...passed 00:07:53.690 Test: blockdev write zeroes read split partial ...passed 00:07:53.690 Test: blockdev reset ...[2024-11-18 23:03:12.864104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:53.690 [2024-11-18 23:03:12.866452] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.690 passed 00:07:53.690 Test: blockdev write read 8 blocks ...passed 00:07:53.690 Test: blockdev write read size > 128k ...passed 00:07:53.690 Test: blockdev write read invalid size ...passed 00:07:53.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.690 Test: blockdev write read max offset ...passed 00:07:53.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.690 Test: blockdev writev readv 8 blocks ...passed 00:07:53.690 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.690 Test: blockdev writev readv block ...passed 00:07:53.690 Test: blockdev writev readv size > 128k ...passed 00:07:53.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.690 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.872185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c000e000 len:0x1000 00:07:53.690 [2024-11-18 23:03:12.872245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme passthru rw ...passed 00:07:53.690 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:03:12.872868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.690 [2024-11-18 23:03:12.872907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme admin passthru ...passed 00:07:53.690 Test: blockdev copy ...passed 00:07:53.690 Suite: bdevio tests on: Nvme2n3 00:07:53.690 Test: blockdev write read block ...passed 00:07:53.690 Test: blockdev write zeroes read block ...passed 00:07:53.690 Test: blockdev write zeroes read no split ...passed 00:07:53.690 Test: blockdev write zeroes read split ...passed 00:07:53.690 Test: blockdev write zeroes read split partial ...passed 00:07:53.690 Test: blockdev reset ...[2024-11-18 23:03:12.885377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:53.690 [2024-11-18 23:03:12.887290] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.690 passed 00:07:53.690 Test: blockdev write read 8 blocks ...passed 00:07:53.690 Test: blockdev write read size > 128k ...passed 00:07:53.690 Test: blockdev write read invalid size ...passed 00:07:53.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.690 Test: blockdev write read max offset ...passed 00:07:53.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.690 Test: blockdev writev readv 8 blocks ...passed 00:07:53.690 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.690 Test: blockdev writev readv block ...passed 00:07:53.690 Test: blockdev writev readv size > 128k ...passed 00:07:53.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.690 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.891834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c000a000 len:0x1000 00:07:53.690 [2024-11-18 23:03:12.891880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme passthru rw ...passed 00:07:53.690 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.690 Test: blockdev nvme admin passthru ...[2024-11-18 23:03:12.892620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.690 [2024-11-18 23:03:12.892646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev copy ...passed 00:07:53.690 Suite: bdevio tests on: Nvme2n2 00:07:53.690 Test: blockdev write read block ...passed 00:07:53.690 Test: blockdev write zeroes read block ...passed 00:07:53.690 Test: blockdev write zeroes read no split ...passed 00:07:53.690 Test: blockdev write zeroes read split ...passed 00:07:53.690 Test: blockdev write zeroes read split partial ...passed 00:07:53.690 Test: blockdev reset ...[2024-11-18 23:03:12.907031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:53.690 [2024-11-18 23:03:12.908883] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.690 passed 00:07:53.690 Test: blockdev write read 8 blocks ...passed 00:07:53.690 Test: blockdev write read size > 128k ...passed 00:07:53.690 Test: blockdev write read invalid size ...passed 00:07:53.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.690 Test: blockdev write read max offset ...passed 00:07:53.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.690 Test: blockdev writev readv 8 blocks ...passed 00:07:53.690 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.690 Test: blockdev writev readv block ...passed 00:07:53.690 Test: blockdev writev readv size > 128k ...passed 00:07:53.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.690 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.914438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4005000 len:0x1000 00:07:53.690 [2024-11-18 23:03:12.914546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme passthru rw ...passed 00:07:53.690 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:03:12.915512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme admin passthru ...[2024-11-18 23:03:12.915580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev copy ...passed 00:07:53.690 Suite: bdevio tests on: Nvme2n1 00:07:53.690 Test: blockdev write read block ...passed 00:07:53.690 Test: blockdev write zeroes read block ...passed 00:07:53.690 Test: blockdev write zeroes read no split ...passed 00:07:53.690 Test: blockdev write zeroes read split ...passed 00:07:53.690 Test: blockdev write zeroes read split partial ...passed 00:07:53.690 Test: blockdev reset ...[2024-11-18 23:03:12.930018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:53.690 [2024-11-18 23:03:12.931963] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.690 passed 00:07:53.690 Test: blockdev write read 8 blocks ...passed 00:07:53.690 Test: blockdev write read size > 128k ...passed 00:07:53.690 Test: blockdev write read invalid size ...passed 00:07:53.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.690 Test: blockdev write read max offset ...passed 00:07:53.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.690 Test: blockdev writev readv 8 blocks ...passed 00:07:53.690 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.690 Test: blockdev writev readv block ...passed 00:07:53.690 Test: blockdev writev readv size > 128k ...passed 00:07:53.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.690 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.937312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfc02000 len:0x1000 00:07:53.690 [2024-11-18 23:03:12.937369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev nvme passthru rw ...passed 00:07:53.690 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.690 Test: blockdev nvme admin passthru ...[2024-11-18 23:03:12.938011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:53.690 [2024-11-18 23:03:12.938040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:53.690 passed 00:07:53.690 Test: blockdev copy ...passed 00:07:53.690 Suite: bdevio tests on: Nvme1n1p2 00:07:53.690 Test: blockdev write read block ...passed 00:07:53.690 Test: blockdev write zeroes read block ...passed 00:07:53.690 Test: blockdev write zeroes read no split ...passed 00:07:53.690 Test: blockdev write zeroes read split ...passed 00:07:53.690 Test: blockdev write zeroes read split partial ...passed 00:07:53.690 Test: blockdev reset ...[2024-11-18 23:03:12.953314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:53.690 [2024-11-18 23:03:12.955373] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.691 passed 00:07:53.691 Test: blockdev write read 8 blocks ...passed 00:07:53.691 Test: blockdev write read size > 128k ...passed 00:07:53.691 Test: blockdev write read invalid size ...passed 00:07:53.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.691 Test: blockdev write read max offset ...passed 00:07:53.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.691 Test: blockdev writev readv 8 blocks ...passed 00:07:53.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.691 Test: blockdev writev readv block ...passed 00:07:53.691 Test: blockdev writev readv size > 128k ...passed 00:07:53.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.691 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.961489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d7a3b000 len:0x1000 00:07:53.691 [2024-11-18 23:03:12.961542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.691 passed 00:07:53.691 Test: blockdev nvme passthru rw ...passed 00:07:53.691 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.691 Test: blockdev nvme admin passthru ...passed 00:07:53.691 Test: blockdev copy ...passed 00:07:53.691 Suite: bdevio tests on: Nvme1n1p1 00:07:53.691 Test: blockdev write read block ...passed 00:07:53.691 Test: blockdev write zeroes read block ...passed 00:07:53.691 Test: blockdev write zeroes read no split ...passed 00:07:53.691 Test: blockdev write zeroes read split ...passed 00:07:53.691 Test: blockdev write zeroes read split partial ...passed 00:07:53.691 Test: blockdev reset ...[2024-11-18 23:03:12.973757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:53.691 [2024-11-18 23:03:12.975480] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:53.691 passed 00:07:53.691 Test: blockdev write read 8 blocks ...passed 00:07:53.691 Test: blockdev write read size > 128k ...passed 00:07:53.691 Test: blockdev write read invalid size ...passed 00:07:53.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.691 Test: blockdev write read max offset ...passed 00:07:53.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.691 Test: blockdev writev readv 8 blocks ...passed 00:07:53.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.691 Test: blockdev writev readv block ...passed 00:07:53.691 Test: blockdev writev readv size > 128k ...passed 00:07:53.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.691 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.980359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d7a37000 len:0x1000 00:07:53.691 [2024-11-18 23:03:12.980414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:53.691 passed 00:07:53.691 Test: blockdev nvme passthru rw ...passed 00:07:53.691 Test: blockdev nvme passthru vendor specific ...passed 00:07:53.691 Test: blockdev nvme admin passthru ...passed 00:07:53.691 Test: blockdev copy ...passed 00:07:53.691 Suite: bdevio tests on: Nvme0n1 00:07:53.691 Test: blockdev write read block ...passed 00:07:53.691 Test: blockdev write zeroes read block ...passed 00:07:53.691 Test: blockdev write zeroes read no split ...passed 00:07:53.691 Test: blockdev write zeroes read split ...passed 00:07:53.691 Test: blockdev write zeroes read split partial ...passed 00:07:53.691 Test: blockdev reset ...[2024-11-18 23:03:12.992374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:53.691 [2024-11-18 23:03:12.994201] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:53.691 passed 00:07:53.691 Test: blockdev write read 8 blocks ...passed 00:07:53.691 Test: blockdev write read size > 128k ...passed 00:07:53.691 Test: blockdev write read invalid size ...passed 00:07:53.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:53.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:53.691 Test: blockdev write read max offset ...passed 00:07:53.691 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:53.691 Test: blockdev writev readv 8 blocks ...passed 00:07:53.691 Test: blockdev writev readv 30 x 1block ...passed 00:07:53.691 Test: blockdev writev readv block ...passed 00:07:53.691 Test: blockdev writev readv size > 128k ...passed 00:07:53.691 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:53.691 Test: blockdev comparev and writev ...[2024-11-18 23:03:12.998462] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:53.691 separate metadata which is not supported yet. 
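The comparev_and_writev subtest is skipped on Nvme0n1 because that namespace carries separate (non-interleaved) metadata, which this path does not support yet. One way to confirm a bdev's metadata layout over the same RPC socket is sketched below; it assumes bdev_get_bdevs reports the md_size and md_interleave fields for NVMe namespaces, which is the usual case:

    # Query the bdev and print its metadata layout fields.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 | \
        python3 -c 'import json,sys; b=json.load(sys.stdin)[0]; print(b["name"], b.get("md_size"), b.get("md_interleave"))'

A nonzero md_size together with md_interleave false is exactly the separate-metadata case the ERROR notice above reports.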
00:07:53.691 passed 00:07:53.691 Test: blockdev nvme passthru rw ...passed 00:07:53.691 Test: blockdev nvme passthru vendor specific ...[2024-11-18 23:03:12.999037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:53.691 passed 00:07:53.691 Test: blockdev nvme admin passthru ...[2024-11-18 23:03:12.999087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:53.691 passed 00:07:53.691 Test: blockdev copy ...passed 00:07:53.691 00:07:53.691 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.691 suites 7 7 n/a 0 0 00:07:53.691 tests 161 161 161 0 0 00:07:53.691 asserts 1025 1025 1025 0 n/a 00:07:53.691 00:07:53.691 Elapsed time = 0.372 seconds 00:07:53.691 0 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73807 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73807 ']' 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73807 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73807 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.691 killing process with pid 73807 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73807' 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73807 00:07:53.691 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73807 00:07:53.950 23:03:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:53.950 00:07:53.950 real 0m1.499s 00:07:53.950 user 0m3.797s 00:07:53.950 sys 0m0.291s 00:07:53.950 ************************************ 00:07:53.950 END TEST bdev_bounds 00:07:53.950 ************************************ 00:07:53.950 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.950 23:03:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:54.211 23:03:13 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:54.211 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.211 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.211 23:03:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.211 ************************************ 00:07:54.211 START TEST bdev_nbd 00:07:54.211 ************************************ 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:54.211 23:03:13 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73850 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73850 /var/tmp/spdk-nbd.sock 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73850 ']' 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.211 23:03:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:54.211 [2024-11-18 23:03:13.451418] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:54.211 [2024-11-18 23:03:13.451538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.471 [2024-11-18 23:03:13.603585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.471 [2024-11-18 23:03:13.648695] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:55.037 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:55.295 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.296 1+0 records in 00:07:55.296 1+0 records out 00:07:55.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484024 s, 8.5 MB/s 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.296 1+0 records in 00:07:55.296 1+0 records out 00:07:55.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523573 s, 7.8 MB/s 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:55.296 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.554 1+0 records in 00:07:55.554 1+0 records out 00:07:55.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381304 s, 10.7 MB/s 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:55.554 23:03:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.812 1+0 records in 00:07:55.812 1+0 records out 00:07:55.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474484 s, 8.6 MB/s 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:55.812 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.069 1+0 records in 00:07:56.069 1+0 records out 00:07:56.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031979 s, 12.8 MB/s 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:56.069 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:56.326 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:56.326 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.327 1+0 records in 00:07:56.327 1+0 records out 00:07:56.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586545 s, 7.0 MB/s 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:56.327 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.584 1+0 records in 00:07:56.584 1+0 records out 00:07:56.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380783 s, 10.8 MB/s 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:56.584 23:03:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.843 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd0", 00:07:56.843 "bdev_name": "Nvme0n1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd1", 00:07:56.843 "bdev_name": "Nvme1n1p1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd2", 00:07:56.843 "bdev_name": "Nvme1n1p2" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd3", 00:07:56.843 "bdev_name": "Nvme2n1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd4", 00:07:56.843 "bdev_name": "Nvme2n2" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd5", 00:07:56.843 "bdev_name": "Nvme2n3" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd6", 00:07:56.843 "bdev_name": "Nvme3n1" 00:07:56.843 } 00:07:56.843 ]' 00:07:56.843 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:56.843 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:56.843 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd0", 00:07:56.843 "bdev_name": "Nvme0n1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd1", 00:07:56.843 "bdev_name": "Nvme1n1p1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd2", 00:07:56.843 "bdev_name": "Nvme1n1p2" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd3", 00:07:56.843 "bdev_name": "Nvme2n1" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd4", 00:07:56.843 "bdev_name": "Nvme2n2" 00:07:56.843 }, 00:07:56.843 { 00:07:56.843 "nbd_device": "/dev/nbd5", 00:07:56.843 "bdev_name": "Nvme2n3" 00:07:56.843 }, 00:07:56.843 { 00:07:56.844 "nbd_device": "/dev/nbd6", 00:07:56.844 "bdev_name": "Nvme3n1" 00:07:56.844 } 00:07:56.844 ]' 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.844 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.102 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.360 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.360 23:03:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.618 23:03:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.875 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.132 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.133 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.390 
23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:58.390 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:58.648 /dev/nbd0 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.648 1+0 records in 00:07:58.648 1+0 records out 00:07:58.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550196 s, 7.4 MB/s 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:58.648 23:03:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:58.906 /dev/nbd1 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:58.906 23:03:18 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.906 1+0 records in 00:07:58.906 1+0 records out 00:07:58.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671721 s, 6.1 MB/s 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:58.906 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:59.164 /dev/nbd10 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.164 1+0 records in 00:07:59.164 1+0 records out 00:07:59.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580845 s, 7.1 MB/s 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:59.164 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:59.423 /dev/nbd11 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.423 1+0 records in 00:07:59.423 1+0 records out 00:07:59.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499482 s, 8.2 MB/s 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:59.423 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:59.681 /dev/nbd12 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
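
Every nbd_start_disk above is followed by the same waitfornbd gate: poll /proc/partitions until the kernel registers the new device, then prove it is readable with a single 4 KiB O_DIRECT read whose byte count must be non-zero (that is where the repeated '1+0 records in/out' and 'stat -c %s' lines come from). A sketch reconstructed from the traced commands; the sleep back-off and the /tmp scratch path are assumptions, since every probe in this run succeeds on the first try and the log writes its scratch file under the repo:

    waitfornbd() {
        local nbd_name=$1 i size
        # Phase 1: wait for the kernel to publish the device (up to 20 probes).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # Phase 2: one 4 KiB O_DIRECT read, also retried up to 20 times.
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # a zero-byte read would mean the device is not live
    }
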
00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.681 1+0 records in 00:07:59.681 1+0 records out 00:07:59.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005659 s, 7.2 MB/s 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:59.681 23:03:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:59.946 /dev/nbd13 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.946 1+0 records in 00:07:59.946 1+0 records out 00:07:59.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445519 s, 9.2 MB/s 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:59.946 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:59.946 /dev/nbd14 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:00.203 1+0 records in 00:08:00.203 1+0 records out 00:08:00.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652171 s, 6.3 MB/s 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:00.203 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd0", 00:08:00.204 "bdev_name": "Nvme0n1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd1", 00:08:00.204 "bdev_name": "Nvme1n1p1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd10", 00:08:00.204 "bdev_name": "Nvme1n1p2" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd11", 00:08:00.204 "bdev_name": "Nvme2n1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd12", 00:08:00.204 "bdev_name": "Nvme2n2" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd13", 00:08:00.204 "bdev_name": "Nvme2n3" 
00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd14", 00:08:00.204 "bdev_name": "Nvme3n1" 00:08:00.204 } 00:08:00.204 ]' 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd0", 00:08:00.204 "bdev_name": "Nvme0n1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd1", 00:08:00.204 "bdev_name": "Nvme1n1p1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd10", 00:08:00.204 "bdev_name": "Nvme1n1p2" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd11", 00:08:00.204 "bdev_name": "Nvme2n1" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd12", 00:08:00.204 "bdev_name": "Nvme2n2" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd13", 00:08:00.204 "bdev_name": "Nvme2n3" 00:08:00.204 }, 00:08:00.204 { 00:08:00.204 "nbd_device": "/dev/nbd14", 00:08:00.204 "bdev_name": "Nvme3n1" 00:08:00.204 } 00:08:00.204 ]' 00:08:00.204 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:00.204 /dev/nbd1 00:08:00.204 /dev/nbd10 00:08:00.204 /dev/nbd11 00:08:00.204 /dev/nbd12 00:08:00.204 /dev/nbd13 00:08:00.204 /dev/nbd14' 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:00.463 /dev/nbd1 00:08:00.463 /dev/nbd10 00:08:00.463 /dev/nbd11 00:08:00.463 /dev/nbd12 00:08:00.463 /dev/nbd13 00:08:00.463 /dev/nbd14' 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:00.463 256+0 records in 00:08:00.463 256+0 records out 00:08:00.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070734 s, 148 MB/s 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.463 256+0 records in 00:08:00.463 256+0 records out 00:08:00.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.106149 s, 9.9 MB/s 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:00.463 256+0 records in 00:08:00.463 256+0 records out 00:08:00.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0981259 s, 10.7 MB/s 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.463 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:00.734 256+0 records in 00:08:00.734 256+0 records out 00:08:00.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103508 s, 10.1 MB/s 00:08:00.734 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.734 23:03:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:00.734 256+0 records in 00:08:00.734 256+0 records out 00:08:00.734 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0935426 s, 11.2 MB/s 00:08:00.734 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.734 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:01.022 256+0 records in 00:08:01.022 256+0 records out 00:08:01.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121005 s, 8.7 MB/s 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:01.022 256+0 records in 00:08:01.022 256+0 records out 00:08:01.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0955776 s, 11.0 MB/s 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:01.022 256+0 records in 00:08:01.022 256+0 records out 00:08:01.022 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0896137 s, 11.7 MB/s 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.022 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.280 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.538 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.796 23:03:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.054 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.312 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.570 23:03:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
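
Each nbd_stop_disk RPC above is paired with waitfornbd_exit, the mirror image of the start-up gate. xtrace does not print exit statuses, so the grep polarity has to be read from context: the loop keeps polling while the device name is still listed and breaks once it vanishes from /proc/partitions (in this run the disconnect is already complete on the first probe, hence grep followed immediately by break). A sketch under that reading, with the retry delay assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered, keep waiting
            else
                break        # gone: the NBD disconnect has completed
            fi
        done
        return 0
    }
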
nbd_disks_name= 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:02.828 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:03.086 malloc_lvol_verify 00:08:03.086 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:03.343 8c4e35a7-d44f-42a6-8a5a-4d4442d5ff3f 00:08:03.343 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:03.343 c5f14d28-512a-4448-9cf3-682aa3923e28 00:08:03.343 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:03.601 /dev/nbd0 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:03.601 mke2fs 1.47.0 (5-Feb-2023) 00:08:03.601 Discarding device blocks: 0/4096 done 00:08:03.601 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:03.601 00:08:03.601 Allocating group tables: 0/1 done 00:08:03.601 Writing inode tables: 0/1 done 00:08:03.601 Creating journal (1024 blocks): done 00:08:03.601 Writing superblocks and filesystem accounting information: 0/1 done 00:08:03.601 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
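
nbd_with_lvol_verify layers a logical volume on a fresh malloc bdev (16 MiB in 512-byte blocks), exports lvs/lvol as /dev/nbd0, and refuses to run mkfs until the kernel reports a capacity. The 8192 in the traced (( 8192 == 0 )) check is the 4 MiB lvol expressed in 512-byte sectors, consistent with the '4096 1k blocks' mkfs prints. A sketch of that capacity gate; the real helper may retry, while the trace only shows the single successful probe:

    wait_for_nbd_set_capacity() {
        local nbd=${1##*/}    # /dev/nbd0 -> nbd0
        # sysfs reports the size in 512-byte sectors; 0 would mean the NBD
        # handshake has not propagated the bdev capacity to the kernel yet.
        [[ -e /sys/block/$nbd/size ]] || return 1
        (( $(< /sys/block/$nbd/size) != 0 ))
    }
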
"${nbd_list[@]}" 00:08:03.601 23:03:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73850 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73850 ']' 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73850 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.859 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73850 00:08:03.860 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.860 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.860 killing process with pid 73850 00:08:03.860 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73850' 00:08:03.860 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73850 00:08:03.860 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73850 00:08:04.117 23:03:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:04.117 00:08:04.117 real 0m9.961s 00:08:04.117 user 0m14.174s 00:08:04.117 sys 0m3.462s 00:08:04.117 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.117 23:03:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 ************************************ 00:08:04.117 END TEST bdev_nbd 00:08:04.117 ************************************ 00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:04.117 skipping fio tests on NVMe due to multi-ns failures. 00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:04.117 23:03:23 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.117 23:03:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:04.117 23:03:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.117 23:03:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 ************************************ 00:08:04.117 START TEST bdev_verify 00:08:04.117 ************************************ 00:08:04.117 23:03:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.117 [2024-11-18 23:03:23.445427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:04.118 [2024-11-18 23:03:23.445540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74253 ] 00:08:04.375 [2024-11-18 23:03:23.590564] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.375 [2024-11-18 23:03:23.628181] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.375 [2024-11-18 23:03:23.628237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.941 Running I/O for 5 seconds... 
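
bdev_verify drives every bdev described in bdev.json through the bdevperf example app, and all the parameters echoed in the result table below come straight from this command line; -C is what yields two jobs per device, one for each core in mask 0x3. An equivalent standalone invocation, assuming a built SPDK tree as the working directory:

    # -q 128   : keep 128 I/Os in flight per job   (depth: 128 in the table)
    # -o 4096  : 4 KiB per I/O                     (IO size: 4096)
    # -w verify: write a pattern, read it back, and compare
    # -t 5     : run for 5 seconds ('Running I/O for 5 seconds...')
    # -C       : every core drives every bdev, hence the 0x1/0x2 job pairs
    # -m 0x3   : core mask covering cores 0 and 1
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
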
00:08:07.245 23296.00 IOPS, 91.00 MiB/s [2024-11-18T23:03:27.562Z] 23648.00 IOPS, 92.38 MiB/s [2024-11-18T23:03:28.505Z] 23637.33 IOPS, 92.33 MiB/s [2024-11-18T23:03:29.447Z] 23360.00 IOPS, 91.25 MiB/s [2024-11-18T23:03:29.447Z] 22835.20 IOPS, 89.20 MiB/s 00:08:10.069 Latency(us) 00:08:10.069 [2024-11-18T23:03:29.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.069 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0xbd0bd 00:08:10.069 Nvme0n1 : 5.07 1615.46 6.31 0.00 0.00 79017.10 19660.80 77836.60 00:08:10.069 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:10.069 Nvme0n1 : 5.07 1602.81 6.26 0.00 0.00 79416.51 12552.66 85095.98 00:08:10.069 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x4ff80 00:08:10.069 Nvme1n1p1 : 5.08 1613.58 6.30 0.00 0.00 78960.49 20971.52 73400.32 00:08:10.069 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:10.069 Nvme1n1p1 : 5.09 1609.60 6.29 0.00 0.00 79130.90 12905.55 70173.93 00:08:10.069 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x4ff7f 00:08:10.069 Nvme1n1p2 : 5.08 1613.04 6.30 0.00 0.00 78878.87 22080.59 70577.23 00:08:10.069 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:10.069 Nvme1n1p2 : 5.09 1608.60 6.28 0.00 0.00 78988.41 14619.57 64527.75 00:08:10.069 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x80000 00:08:10.069 Nvme2n1 : 5.08 1612.57 6.30 0.00 0.00 78806.88 23794.61 70173.93 00:08:10.069 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x80000 length 0x80000 00:08:10.069 Nvme2n1 : 5.09 1608.15 6.28 0.00 0.00 78816.59 14720.39 60494.77 00:08:10.069 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x80000 00:08:10.069 Nvme2n2 : 5.08 1612.07 6.30 0.00 0.00 78697.83 24500.38 72593.72 00:08:10.069 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x80000 length 0x80000 00:08:10.069 Nvme2n2 : 5.10 1607.69 6.28 0.00 0.00 78649.82 15022.87 62107.96 00:08:10.069 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x80000 00:08:10.069 Nvme2n3 : 5.08 1611.63 6.30 0.00 0.00 78592.22 21878.94 75416.81 00:08:10.069 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x80000 length 0x80000 00:08:10.069 Nvme2n3 : 5.10 1607.24 6.28 0.00 0.00 78580.14 15325.34 65737.65 00:08:10.069 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x0 length 0x20000 00:08:10.069 Nvme3n1 : 5.08 1611.14 6.29 0.00 0.00 78489.88 18753.38 76626.71 00:08:10.069 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.069 Verification LBA range: start 0x20000 length 0x20000 00:08:10.069 
Nvme3n1 : 5.10 1606.80 6.28 0.00 0.00 78535.46 15325.34 67754.14 00:08:10.069 [2024-11-18T23:03:29.447Z] =================================================================================================================== 00:08:10.069 [2024-11-18T23:03:29.447Z] Total : 22540.39 88.05 0.00 0.00 78825.46 12552.66 85095.98 00:08:10.665 00:08:10.665 real 0m6.532s 00:08:10.665 user 0m12.238s 00:08:10.665 sys 0m0.235s 00:08:10.665 23:03:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.665 23:03:29 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.665 ************************************ 00:08:10.665 END TEST bdev_verify 00:08:10.665 ************************************ 00:08:10.665 23:03:29 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.665 23:03:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:10.665 23:03:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.665 23:03:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.665 ************************************ 00:08:10.665 START TEST bdev_verify_big_io 00:08:10.665 ************************************ 00:08:10.665 23:03:29 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.922 [2024-11-18 23:03:30.042783] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:10.922 [2024-11-18 23:03:30.042888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74346 ] 00:08:10.922 [2024-11-18 23:03:30.189501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.922 [2024-11-18 23:03:30.238188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.922 [2024-11-18 23:03:30.238213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.485 Running I/O for 5 seconds... 
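
bdev_verify_big_io is the same harness with -o 65536, so each I/O now moves 64 KiB and the MiB/s column is simply IOPS divided by 16. The first progress sample below bears that out: 2669 IOPS x 64 KiB / 1024 = 166.81 MiB/s. As a one-line shell check:

    # 65536-byte I/Os: MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
    printf '%.2f MiB/s\n' "$(echo '2669 / 16' | bc -l)"    # -> 166.81 MiB/s
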
00:08:17.727 2669.00 IOPS, 166.81 MiB/s [2024-11-18T23:03:37.363Z] 3698.00 IOPS, 231.12 MiB/s [2024-11-18T23:03:37.621Z] 3185.00 IOPS, 199.06 MiB/s 00:08:18.243 Latency(us) 00:08:18.243 [2024-11-18T23:03:37.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.243 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0xbd0b 00:08:18.243 Nvme0n1 : 5.70 112.45 7.03 0.00 0.00 1078796.25 19963.27 1174405.12 00:08:18.243 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:18.243 Nvme0n1 : 6.22 48.85 3.05 0.00 0.00 2428648.91 15022.87 2219754.73 00:08:18.243 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0x4ff8 00:08:18.243 Nvme1n1p1 : 5.70 115.03 7.19 0.00 0.00 1042310.12 100421.32 1303460.63 00:08:18.243 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:18.243 Nvme1n1p1 : 6.12 72.88 4.56 0.00 0.00 1587975.94 80659.69 1716438.25 00:08:18.243 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0x4ff7 00:08:18.243 Nvme1n1p2 : 5.78 109.55 6.85 0.00 0.00 1065385.84 72997.02 1858399.31 00:08:18.243 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:18.243 Nvme1n1p2 : 6.15 79.13 4.95 0.00 0.00 1383510.61 32465.53 1716438.25 00:08:18.243 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0x8000 00:08:18.243 Nvme2n1 : 5.90 124.35 7.77 0.00 0.00 908129.78 50210.66 1329271.73 00:08:18.243 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x8000 length 0x8000 00:08:18.243 Nvme2n1 : 6.16 83.50 5.22 0.00 0.00 1242969.78 32868.82 1742249.35 00:08:18.243 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0x8000 00:08:18.243 Nvme2n2 : 5.90 130.07 8.13 0.00 0.00 850372.66 71787.13 1058255.16 00:08:18.243 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x8000 length 0x8000 00:08:18.243 Nvme2n2 : 6.27 102.09 6.38 0.00 0.00 989543.42 19761.62 1780966.01 00:08:18.243 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.243 Verification LBA range: start 0x0 length 0x8000 00:08:18.243 Nvme2n3 : 5.99 139.57 8.72 0.00 0.00 769951.88 27222.65 1084066.26 00:08:18.244 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.244 Verification LBA range: start 0x8000 length 0x8000 00:08:18.244 Nvme2n3 : 6.50 151.19 9.45 0.00 0.00 637634.74 10989.88 1819682.66 00:08:18.244 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:18.244 Verification LBA range: start 0x0 length 0x2000 00:08:18.244 Nvme3n1 : 6.08 151.09 9.44 0.00 0.00 692197.79 17845.96 1122782.92 00:08:18.244 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:18.244 Verification LBA range: start 0x2000 length 0x2000 00:08:18.244 Nvme3n1 : 6.75 261.24 16.33 0.00 0.00 352113.81 573.44 1845493.76 00:08:18.244 
[2024-11-18T23:03:37.622Z] =================================================================================================================== 00:08:18.244 [2024-11-18T23:03:37.622Z] Total : 1681.01 105.06 0.00 0.00 899767.35 573.44 2219754.73 00:08:20.141 00:08:20.141 real 0m9.330s 00:08:20.141 user 0m17.814s 00:08:20.141 sys 0m0.281s 00:08:20.141 23:03:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.141 23:03:39 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 ************************************ 00:08:20.141 END TEST bdev_verify_big_io 00:08:20.141 ************************************ 00:08:20.141 23:03:39 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.141 23:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:20.141 23:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.141 23:03:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.141 ************************************ 00:08:20.141 START TEST bdev_write_zeroes 00:08:20.141 ************************************ 00:08:20.141 23:03:39 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.141 [2024-11-18 23:03:39.412488] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:20.141 [2024-11-18 23:03:39.412594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74455 ] 00:08:20.399 [2024-11-18 23:03:39.553497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.399 [2024-11-18 23:03:39.593952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.657 Running I/O for 1 seconds... 
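
Each phase in this log (bdev_nbd, bdev_verify, bdev_verify_big_io, and now bdev_write_zeroes) is wrapped the same way: run_test prints the START banner, times the command, which is where the real/user/sys triplets come from, and prints the END banner. A sketch inferred from those banners and timing lines; the real wrapper in autotest_common.sh also toggles xtrace and propagates failures, elided here:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen after each test
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
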
00:08:22.032 60480.00 IOPS, 236.25 MiB/s 00:08:22.032 Latency(us) 00:08:22.032 [2024-11-18T23:03:41.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.032 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme0n1 : 1.02 8630.11 33.71 0.00 0.00 14799.64 6654.42 26819.35 00:08:22.032 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme1n1p1 : 1.02 8619.00 33.67 0.00 0.00 14797.46 11594.83 27424.30 00:08:22.032 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme1n1p2 : 1.03 8608.33 33.63 0.00 0.00 14748.69 11292.36 27021.00 00:08:22.032 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme2n1 : 1.03 8598.60 33.59 0.00 0.00 14703.92 10586.58 23693.78 00:08:22.032 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme2n2 : 1.03 8588.42 33.55 0.00 0.00 14670.00 7813.91 23996.26 00:08:22.032 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme2n3 : 1.03 8578.25 33.51 0.00 0.00 14658.25 8570.09 23996.26 00:08:22.032 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:22.032 Nvme3n1 : 1.03 8568.12 33.47 0.00 0.00 14645.79 8166.79 24197.91 00:08:22.032 [2024-11-18T23:03:41.410Z] =================================================================================================================== 00:08:22.032 [2024-11-18T23:03:41.410Z] Total : 60190.82 235.12 0.00 0.00 14717.68 6654.42 27424.30 00:08:22.032 00:08:22.032 real 0m1.903s 00:08:22.032 user 0m1.586s 00:08:22.032 sys 0m0.196s 00:08:22.032 23:03:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.032 ************************************ 00:08:22.032 END TEST bdev_write_zeroes 00:08:22.032 ************************************ 00:08:22.032 23:03:41 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 23:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.032 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:22.032 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.032 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.032 ************************************ 00:08:22.032 START TEST bdev_json_nonenclosed 00:08:22.032 ************************************ 00:08:22.032 23:03:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.032 [2024-11-18 23:03:41.366540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:22.032 [2024-11-18 23:03:41.366655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74497 ] 00:08:22.292 [2024-11-18 23:03:41.513306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.292 [2024-11-18 23:03:41.556038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.292 [2024-11-18 23:03:41.556140] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:22.292 [2024-11-18 23:03:41.556167] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:22.293 [2024-11-18 23:03:41.556180] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.293 00:08:22.293 real 0m0.343s 00:08:22.293 user 0m0.143s 00:08:22.293 sys 0m0.096s 00:08:22.293 23:03:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.293 ************************************ 00:08:22.293 END TEST bdev_json_nonenclosed 00:08:22.293 ************************************ 00:08:22.293 23:03:41 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:22.553 23:03:41 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.553 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:22.553 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.553 23:03:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.553 ************************************ 00:08:22.553 START TEST bdev_json_nonarray 00:08:22.553 ************************************ 00:08:22.553 23:03:41 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:22.553 [2024-11-18 23:03:41.763513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:22.553 [2024-11-18 23:03:41.763627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74523 ] 00:08:22.553 [2024-11-18 23:03:41.912582] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.813 [2024-11-18 23:03:41.963009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.813 [2024-11-18 23:03:41.963133] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
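Both JSON tests here feed a deliberately malformed config to bdevperf and expect json_config_prepare_ctx to reject it with the errors logged above. For contrast, a sketch of the valid shape — a single object whose "subsystems" key is an array (subsystem contents elided):

    cat > /tmp/valid.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # nonenclosed.json drops the outer { }; nonarray.json makes
    # "subsystems" a non-array value, matching the two errors above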
00:08:22.813 [2024-11-18 23:03:41.963168] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:22.813 [2024-11-18 23:03:41.963182] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:22.813 00:08:22.813 real 0m0.356s 00:08:22.813 user 0m0.148s 00:08:22.813 sys 0m0.104s 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:22.813 ************************************ 00:08:22.813 END TEST bdev_json_nonarray 00:08:22.813 ************************************ 00:08:22.813 23:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:22.813 23:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:22.813 23:03:42 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:22.813 23:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.813 23:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.813 23:03:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:22.813 ************************************ 00:08:22.813 START TEST bdev_gpt_uuid 00:08:22.813 ************************************ 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74543 00:08:22.813 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74543 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 74543 ']' 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.814 23:03:42 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.814 [2024-11-18 23:03:42.179282] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
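bdev_gpt_uuid drives a standalone spdk_tgt over its UNIX-socket RPC channel instead of bdevperf. A rough sketch of the start/wait/teardown pattern, assuming the waitforlisten and killprocess helpers from autotest_common.sh; the polling loop below is an illustrative stand-in for waitforlisten, not its actual implementation:

    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/build/bin/spdk_tgt" & spdk_tgt_pid=$!
    # poll until the RPC server accepts requests on /var/tmp/spdk.sock
    until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # ... load_config / bdev_get_bdevs checks run here ...
    kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"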
00:08:22.814 [2024-11-18 23:03:42.179422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74543 ] 00:08:23.074 [2024-11-18 23:03:42.328255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.074 [2024-11-18 23:03:42.376581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:24.017 Some configs were skipped because the RPC state that can call them passed over. 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:24.017 { 00:08:24.017 "name": "Nvme1n1p1", 00:08:24.017 "aliases": [ 00:08:24.017 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:24.017 ], 00:08:24.017 "product_name": "GPT Disk", 00:08:24.017 "block_size": 4096, 00:08:24.017 "num_blocks": 655104, 00:08:24.017 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:24.017 "assigned_rate_limits": { 00:08:24.017 "rw_ios_per_sec": 0, 00:08:24.017 "rw_mbytes_per_sec": 0, 00:08:24.017 "r_mbytes_per_sec": 0, 00:08:24.017 "w_mbytes_per_sec": 0 00:08:24.017 }, 00:08:24.017 "claimed": false, 00:08:24.017 "zoned": false, 00:08:24.017 "supported_io_types": { 00:08:24.017 "read": true, 00:08:24.017 "write": true, 00:08:24.017 "unmap": true, 00:08:24.017 "flush": true, 00:08:24.017 "reset": true, 00:08:24.017 "nvme_admin": false, 00:08:24.017 "nvme_io": false, 00:08:24.017 "nvme_io_md": false, 00:08:24.017 "write_zeroes": true, 00:08:24.017 "zcopy": false, 00:08:24.017 "get_zone_info": false, 00:08:24.017 "zone_management": false, 00:08:24.017 "zone_append": false, 00:08:24.017 "compare": true, 00:08:24.017 "compare_and_write": false, 00:08:24.017 "abort": true, 00:08:24.017 "seek_hole": false, 00:08:24.017 "seek_data": false, 00:08:24.017 "copy": true, 00:08:24.017 "nvme_iov_md": false 00:08:24.017 }, 00:08:24.017 "driver_specific": { 
00:08:24.017 "gpt": { 00:08:24.017 "base_bdev": "Nvme1n1", 00:08:24.017 "offset_blocks": 256, 00:08:24.017 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:24.017 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:24.017 "partition_name": "SPDK_TEST_first" 00:08:24.017 } 00:08:24.017 } 00:08:24.017 } 00:08:24.017 ]' 00:08:24.017 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:24.278 { 00:08:24.278 "name": "Nvme1n1p2", 00:08:24.278 "aliases": [ 00:08:24.278 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:24.278 ], 00:08:24.278 "product_name": "GPT Disk", 00:08:24.278 "block_size": 4096, 00:08:24.278 "num_blocks": 655103, 00:08:24.278 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:24.278 "assigned_rate_limits": { 00:08:24.278 "rw_ios_per_sec": 0, 00:08:24.278 "rw_mbytes_per_sec": 0, 00:08:24.278 "r_mbytes_per_sec": 0, 00:08:24.278 "w_mbytes_per_sec": 0 00:08:24.278 }, 00:08:24.278 "claimed": false, 00:08:24.278 "zoned": false, 00:08:24.278 "supported_io_types": { 00:08:24.278 "read": true, 00:08:24.278 "write": true, 00:08:24.278 "unmap": true, 00:08:24.278 "flush": true, 00:08:24.278 "reset": true, 00:08:24.278 "nvme_admin": false, 00:08:24.278 "nvme_io": false, 00:08:24.278 "nvme_io_md": false, 00:08:24.278 "write_zeroes": true, 00:08:24.278 "zcopy": false, 00:08:24.278 "get_zone_info": false, 00:08:24.278 "zone_management": false, 00:08:24.278 "zone_append": false, 00:08:24.278 "compare": true, 00:08:24.278 "compare_and_write": false, 00:08:24.278 "abort": true, 00:08:24.278 "seek_hole": false, 00:08:24.278 "seek_data": false, 00:08:24.278 "copy": true, 00:08:24.278 "nvme_iov_md": false 00:08:24.278 }, 00:08:24.278 "driver_specific": { 00:08:24.278 "gpt": { 00:08:24.278 "base_bdev": "Nvme1n1", 00:08:24.278 "offset_blocks": 655360, 00:08:24.278 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:24.278 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:24.278 "partition_name": "SPDK_TEST_second" 00:08:24.278 } 00:08:24.278 } 00:08:24.278 } 00:08:24.278 ]' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 74543 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 74543 ']' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 74543 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74543 00:08:24.278 killing process with pid 74543 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74543' 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 74543 00:08:24.278 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 74543 00:08:24.851 00:08:24.851 real 0m1.845s 00:08:24.851 user 0m1.947s 00:08:24.851 sys 0m0.420s 00:08:24.851 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.851 23:03:43 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:24.851 ************************************ 00:08:24.851 END TEST bdev_gpt_uuid 00:08:24.851 ************************************ 00:08:24.851 23:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:24.851 23:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:24.851 23:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:24.851 23:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:24.851 23:03:43 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:24.851 23:03:44 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:24.851 23:03:44 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:24.851 23:03:44 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:24.851 23:03:44 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:25.112 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:25.112 Waiting for block devices as requested 00:08:25.370 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:25.370 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:25.370 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:25.631 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:30.938 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:30.938 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:30.938 23:03:49 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:30.938 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:30.938 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:30.938 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:30.938 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:30.938 23:03:50 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:30.938 00:08:30.938 real 0m49.947s 00:08:30.938 user 1m4.152s 00:08:30.938 sys 0m7.863s 00:08:30.938 ************************************ 00:08:30.938 END TEST blockdev_nvme_gpt 00:08:30.938 ************************************ 00:08:30.938 23:03:50 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.938 23:03:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:30.938 23:03:50 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:30.938 23:03:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.938 23:03:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.938 23:03:50 -- common/autotest_common.sh@10 -- # set +x 00:08:30.938 ************************************ 00:08:30.938 START TEST nvme 00:08:30.938 ************************************ 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:30.938 * Looking for test storage... 00:08:30.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.938 23:03:50 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.938 23:03:50 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.938 23:03:50 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.938 23:03:50 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.938 23:03:50 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.938 23:03:50 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:30.938 23:03:50 nvme -- scripts/common.sh@345 -- # : 1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.938 23:03:50 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.938 23:03:50 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@353 -- # local d=1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.938 23:03:50 nvme -- scripts/common.sh@355 -- # echo 1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.938 23:03:50 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@353 -- # local d=2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.938 23:03:50 nvme -- scripts/common.sh@355 -- # echo 2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.938 23:03:50 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.938 23:03:50 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.938 23:03:50 nvme -- scripts/common.sh@368 -- # return 0 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:30.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.938 --rc genhtml_branch_coverage=1 00:08:30.938 --rc genhtml_function_coverage=1 00:08:30.938 --rc genhtml_legend=1 00:08:30.938 --rc geninfo_all_blocks=1 00:08:30.938 --rc geninfo_unexecuted_blocks=1 00:08:30.938 00:08:30.938 ' 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:30.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.938 --rc genhtml_branch_coverage=1 00:08:30.938 --rc genhtml_function_coverage=1 00:08:30.938 --rc genhtml_legend=1 00:08:30.938 --rc geninfo_all_blocks=1 00:08:30.938 --rc geninfo_unexecuted_blocks=1 00:08:30.938 00:08:30.938 ' 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:30.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.938 --rc genhtml_branch_coverage=1 00:08:30.938 --rc genhtml_function_coverage=1 00:08:30.938 --rc genhtml_legend=1 00:08:30.938 --rc geninfo_all_blocks=1 00:08:30.938 --rc geninfo_unexecuted_blocks=1 00:08:30.938 00:08:30.938 ' 00:08:30.938 23:03:50 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:30.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.938 --rc genhtml_branch_coverage=1 00:08:30.938 --rc genhtml_function_coverage=1 00:08:30.938 --rc genhtml_legend=1 00:08:30.938 --rc geninfo_all_blocks=1 00:08:30.938 --rc geninfo_unexecuted_blocks=1 00:08:30.938 00:08:30.938 ' 00:08:30.938 23:03:50 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:31.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.082 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:32.082 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:32.082 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:32.082 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:32.082 23:03:51 nvme -- nvme/nvme.sh@79 -- # uname 00:08:32.082 23:03:51 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:32.082 23:03:51 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:32.082 23:03:51 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:32.082 23:03:51 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:32.082 Waiting for stub to ready for secondary processes... 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1071 -- # stubpid=75167 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/75167 ]] 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:32.082 23:03:51 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:32.082 [2024-11-18 23:03:51.422450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:32.082 [2024-11-18 23:03:51.422564] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:33.024 [2024-11-18 23:03:52.177498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:33.024 [2024-11-18 23:03:52.197792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.024 [2024-11-18 23:03:52.198208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.024 [2024-11-18 23:03:52.198290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.024 [2024-11-18 23:03:52.209393] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:33.024 [2024-11-18 23:03:52.209529] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:33.024 [2024-11-18 23:03:52.220360] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:33.025 [2024-11-18 23:03:52.220594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:33.025 [2024-11-18 23:03:52.221403] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:33.025 [2024-11-18 23:03:52.221549] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:33.025 [2024-11-18 23:03:52.221592] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:33.025 [2024-11-18 23:03:52.222310] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:33.025 [2024-11-18 23:03:52.222431] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:33.025 [2024-11-18 23:03:52.222470] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:33.025 [2024-11-18 23:03:52.223566] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:33.025 [2024-11-18 23:03:52.223839] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:33.025 [2024-11-18 23:03:52.223891] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:33.025 [2024-11-18 23:03:52.223951] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:33.025 [2024-11-18 23:03:52.224004] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:33.025 done. 00:08:33.025 23:03:52 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:33.025 23:03:52 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:33.025 23:03:52 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:33.025 23:03:52 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:33.025 23:03:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.025 23:03:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.286 ************************************ 00:08:33.286 START TEST nvme_reset 00:08:33.286 ************************************ 00:08:33.286 23:03:52 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:33.286 Initializing NVMe Controllers 00:08:33.286 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:33.286 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:33.286 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:33.286 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:33.286 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:33.286 00:08:33.286 real 0m0.190s 00:08:33.286 user 0m0.060s 00:08:33.286 sys 0m0.087s 00:08:33.286 23:03:52 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.286 23:03:52 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:33.286 ************************************ 00:08:33.286 END TEST nvme_reset 00:08:33.286 ************************************ 00:08:33.286 23:03:52 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:33.286 23:03:52 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.286 23:03:52 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.286 23:03:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.286 ************************************ 00:08:33.286 START TEST nvme_identify 00:08:33.286 ************************************ 00:08:33.286 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:33.286 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:33.286 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:33.286 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.551 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:33.551 23:03:52 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.551 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:33.551 
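nvme_identify first collects every controller's PCI address from the generated config, then dumps each controller with the identify example app; a condensed sketch of that flow, with the jq filter copied from the xtrace above (-i 0 selects shared-memory id 0, as in the logged command):

    rootdir=/home/vagrant/spdk_repo/spdk
    # PCI addresses (traddr) of all NVMe controllers in the generated config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"
    "$rootdir/build/bin/spdk_nvme_identify" -i 0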
===================================================== 00:08:33.551 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.551 ===================================================== 00:08:33.551 Controller Capabilities/Features 00:08:33.551 ================================ 00:08:33.551 Vendor ID: 1b36 00:08:33.551 Subsystem Vendor ID: 1af4 00:08:33.551 Serial Number: 12343 00:08:33.551 Model Number: QEMU NVMe Ctrl 00:08:33.551 Firmware Version: 8.0.0 00:08:33.551 Recommended Arb Burst: 6 00:08:33.551 IEEE OUI Identifier: 00 54 52 00:08:33.551 Multi-path I/O 00:08:33.551 May have multiple subsystem ports: No 00:08:33.551 May have multiple controllers: Yes 00:08:33.551 Associated with SR-IOV VF: No 00:08:33.551 Max Data Transfer Size: 524288 00:08:33.551 Max Number of Namespaces: 256 00:08:33.551 Max Number of I/O Queues: 64 00:08:33.551 NVMe Specification Version (VS): 1.4 00:08:33.551 NVMe Specification Version (Identify): 1.4 00:08:33.551 Maximum Queue Entries: 2048 00:08:33.551 Contiguous Queues Required: Yes 00:08:33.551 Arbitration Mechanisms Supported 00:08:33.551 Weighted Round Robin: Not Supported 00:08:33.551 Vendor Specific: Not Supported 00:08:33.551 Reset Timeout: 7500 ms 00:08:33.551 Doorbell Stride: 4 bytes 00:08:33.551 NVM Subsystem Reset: Not Supported 00:08:33.551 Command Sets Supported 00:08:33.551 NVM Command Set: Supported 00:08:33.551 Boot Partition: Not Supported 00:08:33.551 Memory Page Size Minimum: 4096 bytes 00:08:33.551 Memory Page Size Maximum: 65536 bytes 00:08:33.551 Persistent Memory Region: Not Supported 00:08:33.551 Optional Asynchronous Events Supported 00:08:33.551 Namespace Attribute Notices: Supported 00:08:33.551 Firmware Activation Notices: Not Supported 00:08:33.551 ANA Change Notices: Not Supported 00:08:33.551 PLE Aggregate Log Change Notices: Not Supported 00:08:33.551 LBA Status Info Alert Notices: Not Supported 00:08:33.551 EGE Aggregate Log Change Notices: Not Supported 00:08:33.551 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.551 Zone Descriptor Change Notices: Not Supported 00:08:33.551 Discovery Log Change Notices: Not Supported 00:08:33.551 Controller Attributes 00:08:33.551 128-bit Host Identifier: Not Supported 00:08:33.551 Non-Operational Permissive Mode: Not Supported 00:08:33.551 NVM Sets: Not Supported 00:08:33.551 Read Recovery Levels: Not Supported 00:08:33.551 Endurance Groups: Supported 00:08:33.551 Predictable Latency Mode: Not Supported 00:08:33.551 Traffic Based Keep ALive: Not Supported 00:08:33.551 Namespace Granularity: Not Supported 00:08:33.551 SQ Associations: Not Supported 00:08:33.551 UUID List: Not Supported 00:08:33.551 Multi-Domain Subsystem: Not Supported 00:08:33.551 Fixed Capacity Management: Not Supported 00:08:33.551 Variable Capacity Management: Not Supported 00:08:33.551 Delete Endurance Group: Not Supported 00:08:33.551 Delete NVM Set: Not Supported 00:08:33.551 Extended LBA Formats Supported: Supported 00:08:33.551 Flexible Data Placement Supported: Supported 00:08:33.551 00:08:33.551 Controller Memory Buffer Support 00:08:33.551 ================================ 00:08:33.551 Supported: No 00:08:33.551 00:08:33.551 Persistent Memory Region Support 00:08:33.551 ================================ 00:08:33.551 Supported: No 00:08:33.551 00:08:33.551 Admin Command Set Attributes 00:08:33.551 ============================ 00:08:33.551 Security Send/Receive: Not Supported 00:08:33.551 Format NVM: Supported 00:08:33.551 Firmware Activate/Download: Not Supported 00:08:33.551 Namespace Management: Supported 
00:08:33.551 Device Self-Test: Not Supported 00:08:33.551 Directives: Supported 00:08:33.551 NVMe-MI: Not Supported 00:08:33.551 Virtualization Management: Not Supported 00:08:33.551 Doorbell Buffer Config: Supported 00:08:33.551 Get LBA Status Capability: Not Supported 00:08:33.551 Command & Feature Lockdown Capability: Not Supported 00:08:33.551 Abort Command Limit: 4 00:08:33.551 Async Event Request Limit: 4 00:08:33.551 Number of Firmware Slots: N/A 00:08:33.551 Firmware Slot 1 Read-Only: N/A 00:08:33.551 Firmware Activation Without Reset: N/A 00:08:33.551 Multiple Update Detection Support: N/A 00:08:33.551 Firmware Update Granularity: No Information Provided 00:08:33.551 Per-Namespace SMART Log: Yes 00:08:33.551 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.551 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:33.551 Command Effects Log Page: Supported 00:08:33.551 Get Log Page Extended Data: Supported 00:08:33.551 Telemetry Log Pages: Not Supported 00:08:33.551 Persistent Event Log Pages: Not Supported 00:08:33.551 Supported Log Pages Log Page: May Support 00:08:33.551 Commands Supported & Effects Log Page: Not Supported 00:08:33.551 Feature Identifiers & Effects Log Page:May Support 00:08:33.551 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.551 Data Area 4 for Telemetry Log: Not Supported 00:08:33.551 Error Log Page Entries Supported: 1 00:08:33.551 Keep Alive: Not Supported 00:08:33.551 00:08:33.551 NVM Command Set Attributes 00:08:33.551 ========================== 00:08:33.551 Submission Queue Entry Size 00:08:33.551 Max: 64 00:08:33.551 Min: 64 00:08:33.551 Completion Queue Entry Size 00:08:33.551 Max: 16 00:08:33.551 Min: 16 00:08:33.551 Number of Namespaces: 256 00:08:33.551 Compare Command: Supported 00:08:33.551 Write Uncorrectable Command: Not Supported 00:08:33.551 Dataset Management Command: Supported 00:08:33.551 Write Zeroes Command: Supported 00:08:33.551 Set Features Save Field: Supported 00:08:33.551 Reservations: Not Supported 00:08:33.551 Timestamp: Supported 00:08:33.551 Copy: Supported 00:08:33.551 Volatile Write Cache: Present 00:08:33.551 Atomic Write Unit (Normal): 1 00:08:33.551 Atomic Write Unit (PFail): 1 00:08:33.551 Atomic Compare & Write Unit: 1 00:08:33.551 Fused Compare & Write: Not Supported 00:08:33.551 Scatter-Gather List 00:08:33.551 SGL Command Set: Supported 00:08:33.551 SGL Keyed: Not Supported 00:08:33.551 SGL Bit Bucket Descriptor: Not Supported 00:08:33.551 SGL Metadata Pointer: Not Supported 00:08:33.551 Oversized SGL: Not Supported 00:08:33.551 SGL Metadata Address: Not Supported 00:08:33.551 SGL Offset: Not Supported 00:08:33.551 Transport SGL Data Block: Not Supported 00:08:33.551 Replay Protected Memory Block: Not Supported 00:08:33.551 00:08:33.551 Firmware Slot Information 00:08:33.551 ========================= 00:08:33.551 Active slot: 1 00:08:33.551 Slot 1 Firmware Revision: 1.0 00:08:33.551 00:08:33.551 00:08:33.551 Commands Supported and Effects 00:08:33.551 ============================== 00:08:33.551 Admin Commands 00:08:33.551 -------------- 00:08:33.551 Delete I/O Submission Queue (00h): Supported 00:08:33.552 Create I/O Submission Queue (01h): Supported 00:08:33.552 Get Log Page (02h): Supported 00:08:33.552 Delete I/O Completion Queue (04h): Supported 00:08:33.552 Create I/O Completion Queue (05h): Supported 00:08:33.552 Identify (06h): Supported 00:08:33.552 Abort (08h): Supported 00:08:33.552 Set Features (09h): Supported 00:08:33.552 Get Features (0Ah): Supported 00:08:33.552 Asynchronous Event 
Request (0Ch): Supported 00:08:33.552 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.552 Directive Send (19h): Supported 00:08:33.552 Directive Receive (1Ah): Supported 00:08:33.552 Virtualization Management (1Ch): Supported 00:08:33.552 Doorbell Buffer Config (7Ch): Supported 00:08:33.552 Format NVM (80h): Supported LBA-Change 00:08:33.552 I/O Commands 00:08:33.552 ------------ 00:08:33.552 Flush (00h): Supported LBA-Change 00:08:33.552 Write (01h): Supported LBA-Change 00:08:33.552 Read (02h): Supported 00:08:33.552 Compare (05h): Supported 00:08:33.552 Write Zeroes (08h): Supported LBA-Change 00:08:33.552 Dataset Management (09h): Supported LBA-Change 00:08:33.552 Unknown (0Ch): Supported 00:08:33.552 Unknown (12h): Supported 00:08:33.552 Copy (19h): Supported LBA-Change 00:08:33.552 Unknown (1Dh): Supported LBA-Change 00:08:33.552 00:08:33.552 Error Log 00:08:33.552 ========= 00:08:33.552 00:08:33.552 Arbitration 00:08:33.552 =========== 00:08:33.552 Arbitration Burst: no limit 00:08:33.552 00:08:33.552 Power Management 00:08:33.552 ================ 00:08:33.552 Number of Power States: 1 00:08:33.552 Current Power State: Power State #0 00:08:33.552 Power State #0: 00:08:33.552 Max Power: 25.00 W 00:08:33.552 Non-Operational State: Operational 00:08:33.552 Entry Latency: 16 microseconds 00:08:33.552 Exit Latency: 4 microseconds 00:08:33.552 Relative Read Throughput: 0 00:08:33.552 Relative Read Latency: 0 00:08:33.552 Relative Write Throughput: 0 00:08:33.552 Relative Write Latency: 0 00:08:33.552 Idle Power: Not Reported 00:08:33.552 Active Power: Not Reported 00:08:33.552 Non-Operational Permissive Mode: Not Supported 00:08:33.552 00:08:33.552 Health Information 00:08:33.552 ================== 00:08:33.552 Critical Warnings: 00:08:33.552 Available Spare Space: OK 00:08:33.552 Temperature: OK 00:08:33.552 Device Reliability: OK 00:08:33.552 Read Only: No 00:08:33.552 Volatile Memory Backup: OK 00:08:33.552 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.552 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.552 Available Spare: 0% 00:08:33.552 Available Spare Threshold: 0% 00:08:33.552 Life Percentage Used: 0% 00:08:33.552 Data Units Read: 941 00:08:33.552 Data Units Written: 870 00:08:33.552 Host Read Commands: 38931 00:08:33.552 Host Write Commands: 38354 00:08:33.552 Controller Busy Time: 0 minutes 00:08:33.552 Power Cycles: 0 00:08:33.552 Power On Hours: 0 hours 00:08:33.552 Unsafe Shutdowns: 0 00:08:33.552 Unrecoverable Media Errors: 0 00:08:33.552 Lifetime Error Log Entries: 0 00:08:33.552 Warning Temperature Time: 0 minutes 00:08:33.552 Critical Temperature Time: 0 minutes 00:08:33.552 00:08:33.552 Number of Queues 00:08:33.552 ================ 00:08:33.552 Number of I/O Submission Queues: 64 00:08:33.552 Number of I/O Completion Queues: 64 00:08:33.552 00:08:33.552 ZNS Specific Controller Data 00:08:33.552 ============================ 00:08:33.552 Zone Append Size Limit: 0 00:08:33.552 00:08:33.552 00:08:33.552 Active Namespaces 00:08:33.552 ================= 00:08:33.552 Namespace ID:1 00:08:33.552 Error Recovery Timeout: Unlimited 00:08:33.552 Command Set Identifier: NVM (00h) 00:08:33.552 Deallocate: Supported 00:08:33.552 Deallocated/Unwritten Error: Supported 00:08:33.552 Deallocated Read Value: All 0x00 00:08:33.552 Deallocate in Write Zeroes: Not Supported 00:08:33.552 Deallocated Guard Field: 0xFFFF 00:08:33.552 Flush: Supported 00:08:33.552 Reservation: Not Supported 00:08:33.552 Namespace Sharing Capabilities: Multiple Controllers 
00:08:33.552 Size (in LBAs): 262144 (1GiB) 00:08:33.552 Capacity (in LBAs): 262144 (1GiB) 00:08:33.552 Utilization (in LBAs): 262144 (1GiB) 00:08:33.552 Thin Provisioning: Not Supported 00:08:33.552 Per-NS Atomic Units: No 00:08:33.552 Maximum Single Source Range Length: 128 00:08:33.552 Maximum Copy Length: 128 00:08:33.552 Maximum Source Range Count: 128 00:08:33.552 NGUID/EUI64 Never Reused: No 00:08:33.552 Namespace Write Protected: No 00:08:33.552 Endurance group ID: 1 00:08:33.552 Number of LBA Formats: 8 00:08:33.552 Current LBA Format: LBA Format #04 00:08:33.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.552 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.552 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.552 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.552 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.552 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.552 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.552 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.552 00:08:33.552 Get Feature FDP: 00:08:33.552 ================ 00:08:33.552 Enabled: Yes 00:08:33.552 FDP configuration index: 0 00:08:33.552 00:08:33.552 FDP configurations log page 00:08:33.552 =========================== 00:08:33.552 Number of FDP configurations: 1 00:08:33.552 Version: 0 00:08:33.552 Size: 112 00:08:33.552 FDP Configuration Descriptor: 0 00:08:33.552 Descriptor Size: 96 00:08:33.552 Reclaim Group Identifier format: 2 00:08:33.552 FDP Volatile Write Cache: Not Present 00:08:33.552 FDP Configuration: Valid 00:08:33.552 Vendor Specific Size: 0 00:08:33.552 Number of Reclaim Groups: 2 00:08:33.552 Number of Recalim Unit Handles: 8 00:08:33.552 Max Placement Identifiers: 128 00:08:33.552 Number of Namespaces Suppprted: 256 00:08:33.552 Reclaim unit Nominal Size: 6000000 bytes 00:08:33.552 Estimated Reclaim Unit Time Limit: Not Reported 00:08:33.552 RUH Desc #000: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #001: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #002: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #003: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #004: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #005: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #006: RUH Type: Initially Isolated 00:08:33.552 RUH Desc #007: RUH Type: Initially Isolated 00:08:33.552 00:08:33.552 FDP reclaim unit handle usage log page 00:08:33.552 ====================================== 00:08:33.552 Number of Reclaim Unit Handles: 8 00:08:33.552 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:33.552 RUH Usage Desc #001: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #002: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #003: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #004: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #005: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #006: RUH Attributes: Unused 00:08:33.552 RUH Usage Desc #007: RUH Attributes: Unused 00:08:33.552 00:08:33.552 FDP statistics log page 00:08:33.552 ======================= 00:08:33.552 Host bytes with metadata written: 547856384 00:08:33.552 Media bytes with metadata written: 547934208 00:08:33.552 Media bytes erased: 0 00:08:33.552 00:08:33.552 FDP events log page 00:08:33.552 =================== 00:08:33.552 Number of FDP events: 0 00:08:33.552 00:08:33.552 NVM Specific Namespace Data 00:08:33.552 =========================== 00:08:33.552 Logical Block Storage Tag Mask: 0 00:08:33.552 Protection 
Information Capabilities: 00:08:33.552 16b Guard Protection Information Storage Tag Support: No 00:08:33.552 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.552 Storage Tag Check Read Support: No 00:08:33.552 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.552 ===================================================== 00:08:33.552 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.552 ===================================================== 00:08:33.552 Controller Capabilities/Features 00:08:33.552 ================================ 00:08:33.552 Vendor ID: 1b36 00:08:33.552 Subsystem Vendor ID: 1af4 00:08:33.552 Serial Number: 12340 00:08:33.552 Model Number: QEMU NVMe Ctrl 00:08:33.553 Firmware Version: 8.0.0 00:08:33.553 Recommended Arb Burst: 6 00:08:33.553 IEEE OUI Identifier: 00 54 52 00:08:33.553 Multi-path I/O 00:08:33.553 May have multiple subsystem ports: No 00:08:33.553 May have multiple controllers: No 00:08:33.553 Associated with SR-IOV VF: No 00:08:33.553 Max Data Transfer Size: 524288 00:08:33.553 Max Number of Namespaces: 256 00:08:33.553 Max Number of I/O Queues: 64 00:08:33.553 NVMe Specification Version (VS): 1.4 00:08:33.553 NVMe Specification Version (Identify): 1.4 00:08:33.553 Maximum Queue Entries: 2048 00:08:33.553 Contiguous Queues Required: Yes 00:08:33.553 Arbitration Mechanisms Supported 00:08:33.553 Weighted Round Robin: Not Supported 00:08:33.553 Vendor Specific: Not Supported 00:08:33.553 Reset Timeout: 7500 ms 00:08:33.553 Doorbell Stride: 4 bytes 00:08:33.553 NVM Subsystem Reset: Not Supported 00:08:33.553 Command Sets Supported 00:08:33.553 NVM Command Set: Supported 00:08:33.553 Boot Partition: Not Supported 00:08:33.553 Memory Page Size Minimum: 4096 bytes 00:08:33.553 Memory Page Size Maximum: 65536 bytes 00:08:33.553 Persistent Memory Region: Not Supported 00:08:33.553 Optional Asynchronous Events Supported 00:08:33.553 Namespace Attribute Notices: Supported 00:08:33.553 Firmware Activation Notices: Not Supported 00:08:33.553 ANA Change Notices: Not Supported 00:08:33.553 PLE Aggregate Log Change Notices: Not Supported 00:08:33.553 LBA Status Info Alert Notices: Not Supported 00:08:33.553 EGE Aggregate Log Change Notices: Not Supported 00:08:33.553 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.553 Zone Descriptor Change Notices: Not Supported 00:08:33.553 Discovery Log Change Notices: Not Supported 00:08:33.553 Controller Attributes 00:08:33.553 128-bit Host Identifier: Not Supported 00:08:33.553 Non-Operational Permissive Mode: Not Supported 00:08:33.553 NVM Sets: Not Supported 00:08:33.553 Read Recovery Levels: Not Supported 00:08:33.553 Endurance Groups: Not Supported 00:08:33.553 Predictable Latency Mode: Not Supported 00:08:33.553 Traffic 
Based Keep ALive: Not Supported 00:08:33.553 Namespace Granularity: Not Supported 00:08:33.553 SQ Associations: Not Supported 00:08:33.553 UUID List: Not Supported 00:08:33.553 Multi-Domain Subsystem: Not Supported 00:08:33.553 Fixed Capacity Management: Not Supported 00:08:33.553 Variable Capacity Management: Not Supported 00:08:33.553 Delete Endurance Group: Not Supported 00:08:33.553 Delete NVM Set: Not Supported 00:08:33.553 Extended LBA Formats Supported: Supported 00:08:33.553 Flexible Data Placement Supported: Not Supported 00:08:33.553 00:08:33.553 Controller Memory Buffer Support 00:08:33.553 ================================ 00:08:33.553 Supported: No 00:08:33.553 00:08:33.553 Persistent Memory Region Support 00:08:33.553 ================================ 00:08:33.553 Supported: No 00:08:33.553 00:08:33.553 Admin Command Set Attributes 00:08:33.553 ============================ 00:08:33.553 Security Send/Receive: Not Supported 00:08:33.553 Format NVM: Supported 00:08:33.553 Firmware Activate/Download: Not Supported 00:08:33.553 Namespace Management: Supported 00:08:33.553 Device Self-Test: Not Supported 00:08:33.553 Directives: Supported 00:08:33.553 NVMe-MI: Not Supported 00:08:33.553 Virtualization Management: Not Supported 00:08:33.553 Doorbell Buffer Config: Supported 00:08:33.553 Get LBA Status Capability: Not Supported 00:08:33.553 Command & Feature Lockdown Capability: Not Supported 00:08:33.553 Abort Command Limit: 4 00:08:33.553 Async Event Request Limit: 4 00:08:33.553 Number of Firmware Slots: N/A 00:08:33.553 Firmware Slot 1 Read-Only: N/A 00:08:33.553 Firmware Activation Without Reset: N/A 00:08:33.553 Multiple Update Detection Support: N/A 00:08:33.553 Firmware Update Granularity: No Information Provided 00:08:33.553 Per-Namespace SMART Log: Yes 00:08:33.553 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.553 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:33.553 Command Effects Log Page: Supported 00:08:33.553 Get Log Page Extended Data: Supported 00:08:33.553 Telemetry Log Pages: Not Supported 00:08:33.553 Persistent Event Log Pages: Not Supported 00:08:33.553 Supported Log Pages Log Page: May Support 00:08:33.553 Commands Supported & Effects Log Page: Not Supported 00:08:33.553 Feature Identifiers & Effects Log Page:May Support 00:08:33.553 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.553 Data Area 4 for Telemetry Log: Not Supported 00:08:33.553 Error Log Page Entries Supported: 1 00:08:33.553 Keep Alive: Not Supported 00:08:33.553 00:08:33.553 NVM Command Set Attributes 00:08:33.553 ========================== 00:08:33.553 Submission Queue Entry Size 00:08:33.553 Max: 64 00:08:33.553 Min: 64 00:08:33.553 Completion Queue Entry Size 00:08:33.553 Max: 16 00:08:33.553 Min: 16 00:08:33.553 Number of Namespaces: 256 00:08:33.553 Compare Command: Supported 00:08:33.553 Write Uncorrectable Command: Not Supported 00:08:33.553 Dataset Management Command: Supported 00:08:33.553 Write Zeroes Command: Supported 00:08:33.553 Set Features Save Field: Supported 00:08:33.553 Reservations: Not Supported 00:08:33.553 Timestamp: Supported 00:08:33.553 Copy: Supported 00:08:33.553 Volatile Write Cache: Present 00:08:33.553 Atomic Write Unit (Normal): 1 00:08:33.553 Atomic Write Unit (PFail): 1 00:08:33.553 Atomic Compare & Write Unit: 1 00:08:33.553 Fused Compare & Write: Not Supported 00:08:33.553 Scatter-Gather List 00:08:33.553 SGL Command Set: Supported 00:08:33.553 SGL Keyed: Not Supported 00:08:33.553 SGL Bit Bucket Descriptor: Not Supported 00:08:33.553 
SGL Metadata Pointer: Not Supported 00:08:33.553 Oversized SGL: Not Supported 00:08:33.553 SGL Metadata Address: Not Supported 00:08:33.553 SGL Offset: Not Supported 00:08:33.553 Transport SGL Data Block: Not Supported 00:08:33.553 Replay Protected Memory Block: Not Supported 00:08:33.553 00:08:33.553 Firmware Slot Information 00:08:33.553 ========================= 00:08:33.553 Active slot: 1 00:08:33.553 Slot 1 Firmware Revision: 1.0 00:08:33.553 00:08:33.553 00:08:33.553 Commands Supported and Effects 00:08:33.553 ============================== 00:08:33.553 Admin Commands 00:08:33.553 -------------- 00:08:33.553 Delete I/O Submission Queue (00h): Supported 00:08:33.553 Create I/O Submission Queue (01h): Supported 00:08:33.553 Get Log Page (02h): Supported 00:08:33.553 Delete I/O Completion Queue (04h): Supported 00:08:33.553 Create I/O Completion Queue (05h): Supported 00:08:33.553 Identify (06h): Supported 00:08:33.553 Abort (08h): Supported 00:08:33.553 Set Features (09h): Supported 00:08:33.553 Get Features (0Ah): Supported 00:08:33.553 Asynchronous Event Request (0Ch): Supported 00:08:33.553 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.553 Directive Send (19h): Supported 00:08:33.553 Directive Receive (1Ah): Supported 00:08:33.553 Virtualization Management (1Ch): Supported 00:08:33.553 Doorbell Buffer Config (7Ch): Supported 00:08:33.553 Format NVM (80h): Supported LBA-Change 00:08:33.553 I/O Commands 00:08:33.553 ------------ 00:08:33.553 Flush (00h): Supported LBA-Change 00:08:33.553 Write (01h): Supported LBA-Change 00:08:33.553 Read (02h): Supported 00:08:33.553 Compare (05h): Supported 00:08:33.554 Write Zeroes (08h): Supported LBA-Change 00:08:33.554 Dataset Management (09h): Supported LBA-Change 00:08:33.554 Unknown (0Ch): Supported 00:08:33.554 Unknown (12h): Supported 00:08:33.554 Copy (19h): Supported LBA-Change 00:08:33.554 Unknown (1Dh): Supported LBA-Change 00:08:33.554 00:08:33.554 Error Log 00:08:33.554 ========= 00:08:33.554 00:08:33.554 Arbitration 00:08:33.554 =========== 00:08:33.554 Arbitration Burst: no limit 00:08:33.554 00:08:33.554 Power Management 00:08:33.554 ================ 00:08:33.554 Number of Power States: 1 00:08:33.554 Current Power State: Power State #0 00:08:33.554 Power State #0: 00:08:33.554 Max Power: 25.00 W 00:08:33.554 Non-Operational State: Operational 00:08:33.554 Entry Latency: 16 microseconds 00:08:33.554 Exit Latency: 4 microseconds 00:08:33.554 Relative Read Throughput: 0 00:08:33.554 Relative Read Latency: 0 00:08:33.554 Relative Write Throughput: 0 00:08:33.554 Relative Write Latency: 0 00:08:33.554 Idle Power: Not Reported 00:08:33.554 Active Power: Not Reported 00:08:33.554 Non-Operational Permissive Mode: Not Supported 00:08:33.554 00:08:33.554 Health Information 00:08:33.554 ================== 00:08:33.554 Critical Warnings: 00:08:33.554 Available Spare Space: OK 00:08:33.554 Temperature: OK 00:08:33.554 Device Reliability: OK 00:08:33.554 Read Only: No 00:08:33.554 Volatile Memory Backup: OK 00:08:33.554 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.554 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.554 Available Spare: 0% 00:08:33.554 Available Spare Threshold: 0% 00:08:33.554 Life Percentage Used: 0% 00:08:33.554 Data Units Read: 632 00:08:33.554 Data Units Written: 560 00:08:33.554 Host Read Commands: 36093 00:08:33.554 Host Write Commands: 35879 00:08:33.554 Controller Busy Time: 0 minutes 00:08:33.554 Power Cycles: 0 00:08:33.554 Power On Hours: 0 hours 00:08:33.554 Unsafe Shutdowns: 0 
00:08:33.554 Unrecoverable Media Errors: 0 00:08:33.554 Lifetime Error Log Entries: 0 00:08:33.554 Warning Temperature Time: 0 minutes 00:08:33.554 Critical Temperature Time: 0 minutes 00:08:33.554 00:08:33.554 Number of Queues 00:08:33.554 ================ 00:08:33.554 Number of I/O Submission Queues: 64 00:08:33.554 Number of I/O Completion Queues: 64 00:08:33.554 00:08:33.554 ZNS Specific Controller Data 00:08:33.554 ============================ 00:08:33.554 Zone Append Size Limit: 0 00:08:33.554 00:08:33.554 00:08:33.554 Active Namespaces 00:08:33.554 ================= 00:08:33.554 Namespace ID:1 00:08:33.554 Error Recovery Timeout: Unlimited 00:08:33.554 Command Set Identifier: NVM (00h) 00:08:33.554 Deallocate: Supported 00:08:33.554 Deallocated/Unwritten Error: Supported 00:08:33.554 Deallocated Read Value: All 0x00 00:08:33.554 Deallocate in Write Zeroes: Not Supported 00:08:33.554 Deallocated Guard Field: 0xFFFF 00:08:33.554 Flush: Supported 00:08:33.554 Reservation: Not Supported 00:08:33.554 Metadata Transferred as: Separate Metadata Buffer 00:08:33.554 Namespace Sharing Capabilities: Private 00:08:33.554 Size (in LBAs): 1548666 (5GiB) 00:08:33.554 Capacity (in LBAs): 1548666 (5GiB) 00:08:33.554 Utilization (in LBAs): 1548666 (5GiB) 00:08:33.554 Thin Provisioning: Not Supported 00:08:33.554 Per-NS Atomic Units: No 00:08:33.554 Maximum Single Source Range Length: 128 00:08:33.554 Maximum Copy Length: 128 00:08:33.554 Maximum Source Range Count: 128 00:08:33.554 NGUID/EUI64 Never Reused: No 00:08:33.554 Namespace Write Protected: No 00:08:33.554 Number of LBA Formats: 8 00:08:33.554 Current LBA Format: LBA Format #07 00:08:33.554 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.554 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.554 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.554 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.554 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.554 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.554 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.554 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.554 00:08:33.554 NVM Specific Namespace Data 00:08:33.554 =========================== 00:08:33.554 Logical Block Storage Tag Mask: 0 00:08:33.554 Protection Information Capabilities: 00:08:33.554 16b Guard Protection Information Storage Tag Support: No 00:08:33.554 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.554 Storage Tag Check Read Support: No 00:08:33.554 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.554 ===================================================== 00:08:33.554 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.554 
===================================================== 00:08:33.554 Controller Capabilities/Features 00:08:33.554 ================================ 00:08:33.554 Vendor ID: 1b36 00:08:33.554 Subsystem Vendor ID: 1af4 00:08:33.554 Serial Number: 12341 00:08:33.554 Model Number: QEMU NVMe Ctrl 00:08:33.554 Firmware Version: 8.0.0 00:08:33.554 Recommended Arb Burst: 6 00:08:33.554 IEEE OUI Identifier: 00 54 52 00:08:33.554 Multi-path I/O 00:08:33.554 May have multiple subsystem ports: No 00:08:33.554 May have multiple controllers: No 00:08:33.554 Associated with SR-IOV VF: No 00:08:33.554 Max Data Transfer Size: 524288 00:08:33.554 Max Number of Namespaces: 256 00:08:33.554 Max Number of I/O Queues: 64 00:08:33.554 NVMe Specification Version (VS): 1.4 00:08:33.554 NVMe Specification Version (Identify): 1.4 00:08:33.554 Maximum Queue Entries: 2048 00:08:33.554 Contiguous Queues Required: Yes 00:08:33.554 Arbitration Mechanisms Supported 00:08:33.554 Weighted Round Robin: Not Supported 00:08:33.554 Vendor Specific: Not Supported 00:08:33.554 Reset Timeout: 7500 ms 00:08:33.554 Doorbell Stride: 4 bytes 00:08:33.554 NVM Subsystem Reset: Not Supported 00:08:33.554 Command Sets Supported 00:08:33.554 NVM Command Set: Supported 00:08:33.554 Boot Partition: Not Supported 00:08:33.554 Memory Page Size Minimum: 4096 bytes 00:08:33.554 Memory Page Size Maximum: 65536 bytes 00:08:33.554 Persistent Memory Region: Not Supported 00:08:33.554 Optional Asynchronous Events Supported 00:08:33.554 Namespace Attribute Notices: Supported 00:08:33.554 Firmware Activation Notices: Not Supported 00:08:33.554 ANA Change Notices: Not Supported 00:08:33.554 PLE Aggregate Log Change Notices: Not Supported 00:08:33.554 LBA Status Info Alert Notices: Not Supported 00:08:33.554 EGE Aggregate Log Change Notices: Not Supported 00:08:33.554 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.554 Zone Descriptor Change Notices: Not Supported 00:08:33.554 Discovery Log Change Notices: Not Supported 00:08:33.554 Controller Attributes 00:08:33.554 128-bit Host Identifier: Not Supported 00:08:33.554 Non-Operational Permissive Mode: Not Supported 00:08:33.554 NVM Sets: Not Supported 00:08:33.554 Read Recovery Levels: Not Supported 00:08:33.554 Endurance Groups: Not Supported 00:08:33.554 Predictable Latency Mode: Not Supported 00:08:33.554 Traffic Based Keep ALive: Not Supported 00:08:33.554 Namespace Granularity: Not Supported 00:08:33.554 SQ Associations: Not Supported 00:08:33.554 UUID List: Not Supported 00:08:33.554 Multi-Domain Subsystem: Not Supported 00:08:33.554 Fixed Capacity Management: Not Supported 00:08:33.554 Variable Capacity Management: Not Supported 00:08:33.554 Delete Endurance Group: Not Supported 00:08:33.554 Delete NVM Set: Not Supported 00:08:33.554 Extended LBA Formats Supported: Supported 00:08:33.554 Flexible Data Placement Supported: Not Supported 00:08:33.554 00:08:33.554 Controller Memory Buffer Support 00:08:33.554 ================================ 00:08:33.554 Supported: No 00:08:33.554 00:08:33.554 Persistent Memory Region Support 00:08:33.554 ================================ 00:08:33.554 Supported: No 00:08:33.554 00:08:33.554 Admin Command Set Attributes 00:08:33.554 ============================ 00:08:33.555 Security Send/Receive: Not Supported 00:08:33.555 Format NVM: Supported 00:08:33.555 Firmware Activate/Download: Not Supported 00:08:33.555 Namespace Management: Supported 00:08:33.555 Device Self-Test: Not Supported 00:08:33.555 Directives: Supported 00:08:33.555 NVMe-MI: Not Supported 
00:08:33.555 Virtualization Management: Not Supported 00:08:33.555 Doorbell Buffer Config: Supported 00:08:33.555 Get LBA Status Capability: Not Supported 00:08:33.555 Command & Feature Lockdown Capability: Not Supported 00:08:33.555 Abort Command Limit: 4 00:08:33.555 Async Event Request Limit: 4 00:08:33.555 Number of Firmware Slots: N/A 00:08:33.555 Firmware Slot 1 Read-Only: N/A 00:08:33.555 Firmware Activation Without Reset: N/A 00:08:33.555 Multiple Update Detection Support: N/A 00:08:33.555 Firmware Update Granularity: No Information Provided 00:08:33.555 Per-Namespace SMART Log: Yes 00:08:33.555 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.555 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:33.555 Command Effects Log Page: Supported 00:08:33.555 Get Log Page Extended Data: Supported 00:08:33.555 Telemetry Log Pages: Not Supported 00:08:33.555 Persistent Event Log Pages: Not Supported 00:08:33.555 Supported Log Pages Log Page: May Support 00:08:33.555 Commands Supported & Effects Log Page: Not Supported 00:08:33.555 Feature Identifiers & Effects Log Page:May Support 00:08:33.555 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.555 Data Area 4 for Telemetry Log: Not Supported 00:08:33.555 Error Log Page Entries Supported: 1 00:08:33.555 Keep Alive: Not Supported 00:08:33.555 00:08:33.555 NVM Command Set Attributes 00:08:33.555 ========================== 00:08:33.555 Submission Queue Entry Size 00:08:33.555 Max: 64 00:08:33.555 Min: 64 00:08:33.555 Completion Queue Entry Size 00:08:33.555 Max: 16 00:08:33.555 Min: 16 00:08:33.555 Number of Namespaces: 256 00:08:33.555 Compare Command: Supported 00:08:33.555 Write Uncorrectable Command: Not Supported 00:08:33.555 Dataset Management Command: Supported 00:08:33.555 Write Zeroes Command: Supported 00:08:33.555 Set Features Save Field: Supported 00:08:33.555 Reservations: Not Supported 00:08:33.555 Timestamp: Supported 00:08:33.555 Copy: Supported 00:08:33.555 Volatile Write Cache: Present 00:08:33.555 Atomic Write Unit (Normal): 1 00:08:33.555 Atomic Write Unit (PFail): 1 00:08:33.555 Atomic Compare & Write Unit: 1 00:08:33.555 Fused Compare & Write: Not Supported 00:08:33.555 Scatter-Gather List 00:08:33.555 SGL Command Set: Supported 00:08:33.555 SGL Keyed: Not Supported 00:08:33.555 SGL Bit Bucket Descriptor: Not Supported 00:08:33.555 SGL Metadata Pointer: Not Supported 00:08:33.555 Oversized SGL: Not Supported 00:08:33.555 SGL Metadata Address: Not Supported 00:08:33.555 SGL Offset: Not Supported 00:08:33.555 Transport SGL Data Block: Not Supported 00:08:33.555 Replay Protected Memory Block: Not Supported 00:08:33.555 00:08:33.555 Firmware Slot Information 00:08:33.555 ========================= 00:08:33.555 Active slot: 1 00:08:33.555 Slot 1 Firmware Revision: 1.0 00:08:33.555 00:08:33.555 00:08:33.555 Commands Supported and Effects 00:08:33.555 ============================== 00:08:33.555 Admin Commands 00:08:33.555 -------------- 00:08:33.555 Delete I/O Submission Queue (00h): Supported 00:08:33.555 Create I/O Submission Queue (01h): Supported 00:08:33.555 Get Log Page (02h): Supported 00:08:33.555 Delete I/O Completion Queue (04h): Supported 00:08:33.555 Create I/O Completion Queue (05h): Supported 00:08:33.555 Identify (06h): Supported 00:08:33.555 Abort (08h): Supported 00:08:33.555 Set Features (09h): Supported 00:08:33.555 [2024-11-18 23:03:52.887983] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 75189 terminated unexpected 00:08:33.555 [2024-11-18
23:03:52.889958] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 75189 terminated unexpected 00:08:33.555 [2024-11-18 23:03:52.890646] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 75189 terminated unexpected 00:08:33.555 Get Features (0Ah): Supported 00:08:33.555 Asynchronous Event Request (0Ch): Supported 00:08:33.555 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.555 Directive Send (19h): Supported 00:08:33.555 Directive Receive (1Ah): Supported 00:08:33.555 Virtualization Management (1Ch): Supported 00:08:33.555 Doorbell Buffer Config (7Ch): Supported 00:08:33.555 Format NVM (80h): Supported LBA-Change 00:08:33.555 I/O Commands 00:08:33.555 ------------ 00:08:33.555 Flush (00h): Supported LBA-Change 00:08:33.555 Write (01h): Supported LBA-Change 00:08:33.555 Read (02h): Supported 00:08:33.555 Compare (05h): Supported 00:08:33.555 Write Zeroes (08h): Supported LBA-Change 00:08:33.555 Dataset Management (09h): Supported LBA-Change 00:08:33.555 Unknown (0Ch): Supported 00:08:33.555 Unknown (12h): Supported 00:08:33.555 Copy (19h): Supported LBA-Change 00:08:33.555 Unknown (1Dh): Supported LBA-Change 00:08:33.555 00:08:33.555 Error Log 00:08:33.555 ========= 00:08:33.555 00:08:33.555 Arbitration 00:08:33.555 =========== 00:08:33.555 Arbitration Burst: no limit 00:08:33.555 00:08:33.555 Power Management 00:08:33.555 ================ 00:08:33.555 Number of Power States: 1 00:08:33.555 Current Power State: Power State #0 00:08:33.555 Power State #0: 00:08:33.555 Max Power: 25.00 W 00:08:33.555 Non-Operational State: Operational 00:08:33.555 Entry Latency: 16 microseconds 00:08:33.555 Exit Latency: 4 microseconds 00:08:33.555 Relative Read Throughput: 0 00:08:33.555 Relative Read Latency: 0 00:08:33.555 Relative Write Throughput: 0 00:08:33.555 Relative Write Latency: 0 00:08:33.555 Idle Power: Not Reported 00:08:33.555 Active Power: Not Reported 00:08:33.555 Non-Operational Permissive Mode: Not Supported 00:08:33.555 00:08:33.555 Health Information 00:08:33.555 ================== 00:08:33.555 Critical Warnings: 00:08:33.555 Available Spare Space: OK 00:08:33.555 Temperature: OK 00:08:33.555 Device Reliability: OK 00:08:33.555 Read Only: No 00:08:33.555 Volatile Memory Backup: OK 00:08:33.555 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.555 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.555 Available Spare: 0% 00:08:33.555 Available Spare Threshold: 0% 00:08:33.555 Life Percentage Used: 0% 00:08:33.555 Data Units Read: 1006 00:08:33.555 Data Units Written: 867 00:08:33.555 Host Read Commands: 55080 00:08:33.555 Host Write Commands: 53764 00:08:33.555 Controller Busy Time: 0 minutes 00:08:33.555 Power Cycles: 0 00:08:33.555 Power On Hours: 0 hours 00:08:33.555 Unsafe Shutdowns: 0 00:08:33.555 Unrecoverable Media Errors: 0 00:08:33.555 Lifetime Error Log Entries: 0 00:08:33.555 Warning Temperature Time: 0 minutes 00:08:33.555 Critical Temperature Time: 0 minutes 00:08:33.555 00:08:33.555 Number of Queues 00:08:33.555 ================ 00:08:33.555 Number of I/O Submission Queues: 64 00:08:33.555 Number of I/O Completion Queues: 64 00:08:33.555 00:08:33.555 ZNS Specific Controller Data 00:08:33.555 ============================ 00:08:33.555 Zone Append Size Limit: 0 00:08:33.555 00:08:33.555 00:08:33.555 Active Namespaces 00:08:33.555 ================= 00:08:33.555 Namespace ID:1 00:08:33.555 Error Recovery Timeout: Unlimited 00:08:33.555 Command Set Identifier: NVM (00h) 00:08:33.555 Deallocate:
Supported 00:08:33.555 Deallocated/Unwritten Error: Supported 00:08:33.555 Deallocated Read Value: All 0x00 00:08:33.555 Deallocate in Write Zeroes: Not Supported 00:08:33.555 Deallocated Guard Field: 0xFFFF 00:08:33.555 Flush: Supported 00:08:33.555 Reservation: Not Supported 00:08:33.555 Namespace Sharing Capabilities: Private 00:08:33.555 Size (in LBAs): 1310720 (5GiB) 00:08:33.555 Capacity (in LBAs): 1310720 (5GiB) 00:08:33.555 Utilization (in LBAs): 1310720 (5GiB) 00:08:33.555 Thin Provisioning: Not Supported 00:08:33.555 Per-NS Atomic Units: No 00:08:33.555 Maximum Single Source Range Length: 128 00:08:33.555 Maximum Copy Length: 128 00:08:33.555 Maximum Source Range Count: 128 00:08:33.555 NGUID/EUI64 Never Reused: No 00:08:33.555 Namespace Write Protected: No 00:08:33.555 Number of LBA Formats: 8 00:08:33.555 Current LBA Format: LBA Format #04 00:08:33.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.555 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.555 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.555 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.556 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.556 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.556 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.556 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.556 00:08:33.556 NVM Specific Namespace Data 00:08:33.556 =========================== 00:08:33.556 Logical Block Storage Tag Mask: 0 00:08:33.556 Protection Information Capabilities: 00:08:33.556 16b Guard Protection Information Storage Tag Support: No 00:08:33.556 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.556 Storage Tag Check Read Support: No 00:08:33.556 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.556 ===================================================== 00:08:33.556 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.556 ===================================================== 00:08:33.556 Controller Capabilities/Features 00:08:33.556 ================================ 00:08:33.556 Vendor ID: 1b36 00:08:33.556 Subsystem Vendor ID: 1af4 00:08:33.556 Serial Number: 12342 00:08:33.556 Model Number: QEMU NVMe Ctrl 00:08:33.556 Firmware Version: 8.0.0 00:08:33.556 Recommended Arb Burst: 6 00:08:33.556 IEEE OUI Identifier: 00 54 52 00:08:33.556 Multi-path I/O 00:08:33.556 May have multiple subsystem ports: No 00:08:33.556 May have multiple controllers: No 00:08:33.556 Associated with SR-IOV VF: No 00:08:33.556 Max Data Transfer Size: 524288 00:08:33.556 Max Number of Namespaces: 256 00:08:33.556 Max Number of I/O Queues: 64 00:08:33.556 NVMe Specification Version (VS): 1.4 00:08:33.556 NVMe Specification Version (Identify): 1.4 00:08:33.556 
Maximum Queue Entries: 2048 00:08:33.556 Contiguous Queues Required: Yes 00:08:33.556 Arbitration Mechanisms Supported 00:08:33.556 Weighted Round Robin: Not Supported 00:08:33.556 Vendor Specific: Not Supported 00:08:33.556 Reset Timeout: 7500 ms 00:08:33.556 Doorbell Stride: 4 bytes 00:08:33.556 NVM Subsystem Reset: Not Supported 00:08:33.556 Command Sets Supported 00:08:33.556 NVM Command Set: Supported 00:08:33.556 Boot Partition: Not Supported 00:08:33.556 Memory Page Size Minimum: 4096 bytes 00:08:33.556 Memory Page Size Maximum: 65536 bytes 00:08:33.556 Persistent Memory Region: Not Supported 00:08:33.556 Optional Asynchronous Events Supported 00:08:33.556 Namespace Attribute Notices: Supported 00:08:33.556 Firmware Activation Notices: Not Supported 00:08:33.556 ANA Change Notices: Not Supported 00:08:33.556 PLE Aggregate Log Change Notices: Not Supported 00:08:33.556 LBA Status Info Alert Notices: Not Supported 00:08:33.556 EGE Aggregate Log Change Notices: Not Supported 00:08:33.556 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.556 Zone Descriptor Change Notices: Not Supported 00:08:33.556 Discovery Log Change Notices: Not Supported 00:08:33.556 Controller Attributes 00:08:33.556 128-bit Host Identifier: Not Supported 00:08:33.556 Non-Operational Permissive Mode: Not Supported 00:08:33.556 NVM Sets: Not Supported 00:08:33.556 Read Recovery Levels: Not Supported 00:08:33.556 Endurance Groups: Not Supported 00:08:33.556 Predictable Latency Mode: Not Supported 00:08:33.556 Traffic Based Keep ALive: Not Supported 00:08:33.556 Namespace Granularity: Not Supported 00:08:33.556 SQ Associations: Not Supported 00:08:33.556 UUID List: Not Supported 00:08:33.556 Multi-Domain Subsystem: Not Supported 00:08:33.556 Fixed Capacity Management: Not Supported 00:08:33.556 Variable Capacity Management: Not Supported 00:08:33.556 Delete Endurance Group: Not Supported 00:08:33.556 Delete NVM Set: Not Supported 00:08:33.556 [2024-11-18 23:03:52.892164] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 75189 terminated unexpected 00:08:33.556 Extended LBA Formats Supported: Supported 00:08:33.556 Flexible Data Placement Supported: Not Supported 00:08:33.556 00:08:33.556 Controller Memory Buffer Support 00:08:33.556 ================================ 00:08:33.556 Supported: No 00:08:33.556 00:08:33.556 Persistent Memory Region Support 00:08:33.556 ================================ 00:08:33.556 Supported: No 00:08:33.556 00:08:33.556 Admin Command Set Attributes 00:08:33.556 ============================ 00:08:33.556 Security Send/Receive: Not Supported 00:08:33.556 Format NVM: Supported 00:08:33.556 Firmware Activate/Download: Not Supported 00:08:33.556 Namespace Management: Supported 00:08:33.556 Device Self-Test: Not Supported 00:08:33.556 Directives: Supported 00:08:33.556 NVMe-MI: Not Supported 00:08:33.556 Virtualization Management: Not Supported 00:08:33.556 Doorbell Buffer Config: Supported 00:08:33.556 Get LBA Status Capability: Not Supported 00:08:33.556 Command & Feature Lockdown Capability: Not Supported 00:08:33.556 Abort Command Limit: 4 00:08:33.556 Async Event Request Limit: 4 00:08:33.556 Number of Firmware Slots: N/A 00:08:33.556 Firmware Slot 1 Read-Only: N/A 00:08:33.556 Firmware Activation Without Reset: N/A 00:08:33.556 Multiple Update Detection Support: N/A 00:08:33.556 Firmware Update Granularity: No Information Provided 00:08:33.556 Per-Namespace SMART Log: Yes 00:08:33.556 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.556
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:33.556 Command Effects Log Page: Supported 00:08:33.556 Get Log Page Extended Data: Supported 00:08:33.556 Telemetry Log Pages: Not Supported 00:08:33.556 Persistent Event Log Pages: Not Supported 00:08:33.556 Supported Log Pages Log Page: May Support 00:08:33.556 Commands Supported & Effects Log Page: Not Supported 00:08:33.556 Feature Identifiers & Effects Log Page:May Support 00:08:33.556 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.556 Data Area 4 for Telemetry Log: Not Supported 00:08:33.556 Error Log Page Entries Supported: 1 00:08:33.556 Keep Alive: Not Supported 00:08:33.556 00:08:33.556 NVM Command Set Attributes 00:08:33.556 ========================== 00:08:33.556 Submission Queue Entry Size 00:08:33.556 Max: 64 00:08:33.556 Min: 64 00:08:33.556 Completion Queue Entry Size 00:08:33.556 Max: 16 00:08:33.556 Min: 16 00:08:33.556 Number of Namespaces: 256 00:08:33.556 Compare Command: Supported 00:08:33.556 Write Uncorrectable Command: Not Supported 00:08:33.556 Dataset Management Command: Supported 00:08:33.556 Write Zeroes Command: Supported 00:08:33.556 Set Features Save Field: Supported 00:08:33.556 Reservations: Not Supported 00:08:33.556 Timestamp: Supported 00:08:33.556 Copy: Supported 00:08:33.556 Volatile Write Cache: Present 00:08:33.556 Atomic Write Unit (Normal): 1 00:08:33.556 Atomic Write Unit (PFail): 1 00:08:33.556 Atomic Compare & Write Unit: 1 00:08:33.556 Fused Compare & Write: Not Supported 00:08:33.556 Scatter-Gather List 00:08:33.556 SGL Command Set: Supported 00:08:33.556 SGL Keyed: Not Supported 00:08:33.556 SGL Bit Bucket Descriptor: Not Supported 00:08:33.556 SGL Metadata Pointer: Not Supported 00:08:33.556 Oversized SGL: Not Supported 00:08:33.556 SGL Metadata Address: Not Supported 00:08:33.556 SGL Offset: Not Supported 00:08:33.556 Transport SGL Data Block: Not Supported 00:08:33.556 Replay Protected Memory Block: Not Supported 00:08:33.556 00:08:33.556 Firmware Slot Information 00:08:33.556 ========================= 00:08:33.556 Active slot: 1 00:08:33.556 Slot 1 Firmware Revision: 1.0 00:08:33.556 00:08:33.556 00:08:33.556 Commands Supported and Effects 00:08:33.556 ============================== 00:08:33.556 Admin Commands 00:08:33.556 -------------- 00:08:33.556 Delete I/O Submission Queue (00h): Supported 00:08:33.556 Create I/O Submission Queue (01h): Supported 00:08:33.556 Get Log Page (02h): Supported 00:08:33.556 Delete I/O Completion Queue (04h): Supported 00:08:33.556 Create I/O Completion Queue (05h): Supported 00:08:33.556 Identify (06h): Supported 00:08:33.556 Abort (08h): Supported 00:08:33.556 Set Features (09h): Supported 00:08:33.557 Get Features (0Ah): Supported 00:08:33.557 Asynchronous Event Request (0Ch): Supported 00:08:33.557 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.557 Directive Send (19h): Supported 00:08:33.557 Directive Receive (1Ah): Supported 00:08:33.557 Virtualization Management (1Ch): Supported 00:08:33.557 Doorbell Buffer Config (7Ch): Supported 00:08:33.557 Format NVM (80h): Supported LBA-Change 00:08:33.557 I/O Commands 00:08:33.557 ------------ 00:08:33.557 Flush (00h): Supported LBA-Change 00:08:33.557 Write (01h): Supported LBA-Change 00:08:33.557 Read (02h): Supported 00:08:33.557 Compare (05h): Supported 00:08:33.557 Write Zeroes (08h): Supported LBA-Change 00:08:33.557 Dataset Management (09h): Supported LBA-Change 00:08:33.557 Unknown (0Ch): Supported 00:08:33.557 Unknown (12h): Supported 00:08:33.557 Copy (19h): Supported 
LBA-Change 00:08:33.557 Unknown (1Dh): Supported LBA-Change 00:08:33.557 00:08:33.557 Error Log 00:08:33.557 ========= 00:08:33.557 00:08:33.557 Arbitration 00:08:33.557 =========== 00:08:33.557 Arbitration Burst: no limit 00:08:33.557 00:08:33.557 Power Management 00:08:33.557 ================ 00:08:33.557 Number of Power States: 1 00:08:33.557 Current Power State: Power State #0 00:08:33.557 Power State #0: 00:08:33.557 Max Power: 25.00 W 00:08:33.557 Non-Operational State: Operational 00:08:33.557 Entry Latency: 16 microseconds 00:08:33.557 Exit Latency: 4 microseconds 00:08:33.557 Relative Read Throughput: 0 00:08:33.557 Relative Read Latency: 0 00:08:33.557 Relative Write Throughput: 0 00:08:33.557 Relative Write Latency: 0 00:08:33.557 Idle Power: Not Reported 00:08:33.557 Active Power: Not Reported 00:08:33.557 Non-Operational Permissive Mode: Not Supported 00:08:33.557 00:08:33.557 Health Information 00:08:33.557 ================== 00:08:33.557 Critical Warnings: 00:08:33.557 Available Spare Space: OK 00:08:33.557 Temperature: OK 00:08:33.557 Device Reliability: OK 00:08:33.557 Read Only: No 00:08:33.557 Volatile Memory Backup: OK 00:08:33.557 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.557 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.557 Available Spare: 0% 00:08:33.557 Available Spare Threshold: 0% 00:08:33.557 Life Percentage Used: 0% 00:08:33.557 Data Units Read: 2204 00:08:33.557 Data Units Written: 1991 00:08:33.557 Host Read Commands: 111837 00:08:33.557 Host Write Commands: 110106 00:08:33.557 Controller Busy Time: 0 minutes 00:08:33.557 Power Cycles: 0 00:08:33.557 Power On Hours: 0 hours 00:08:33.557 Unsafe Shutdowns: 0 00:08:33.557 Unrecoverable Media Errors: 0 00:08:33.557 Lifetime Error Log Entries: 0 00:08:33.557 Warning Temperature Time: 0 minutes 00:08:33.557 Critical Temperature Time: 0 minutes 00:08:33.557 00:08:33.557 Number of Queues 00:08:33.557 ================ 00:08:33.557 Number of I/O Submission Queues: 64 00:08:33.557 Number of I/O Completion Queues: 64 00:08:33.557 00:08:33.557 ZNS Specific Controller Data 00:08:33.557 ============================ 00:08:33.557 Zone Append Size Limit: 0 00:08:33.557 00:08:33.557 00:08:33.557 Active Namespaces 00:08:33.557 ================= 00:08:33.557 Namespace ID:1 00:08:33.557 Error Recovery Timeout: Unlimited 00:08:33.557 Command Set Identifier: NVM (00h) 00:08:33.557 Deallocate: Supported 00:08:33.557 Deallocated/Unwritten Error: Supported 00:08:33.557 Deallocated Read Value: All 0x00 00:08:33.557 Deallocate in Write Zeroes: Not Supported 00:08:33.557 Deallocated Guard Field: 0xFFFF 00:08:33.557 Flush: Supported 00:08:33.557 Reservation: Not Supported 00:08:33.557 Namespace Sharing Capabilities: Private 00:08:33.557 Size (in LBAs): 1048576 (4GiB) 00:08:33.557 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.557 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.557 Thin Provisioning: Not Supported 00:08:33.557 Per-NS Atomic Units: No 00:08:33.557 Maximum Single Source Range Length: 128 00:08:33.557 Maximum Copy Length: 128 00:08:33.557 Maximum Source Range Count: 128 00:08:33.557 NGUID/EUI64 Never Reused: No 00:08:33.557 Namespace Write Protected: No 00:08:33.557 Number of LBA Formats: 8 00:08:33.557 Current LBA Format: LBA Format #04 00:08:33.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.557 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:08:33.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.557 00:08:33.557 NVM Specific Namespace Data 00:08:33.557 =========================== 00:08:33.557 Logical Block Storage Tag Mask: 0 00:08:33.557 Protection Information Capabilities: 00:08:33.557 16b Guard Protection Information Storage Tag Support: No 00:08:33.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.557 Storage Tag Check Read Support: No 00:08:33.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Namespace ID:2 00:08:33.557 Error Recovery Timeout: Unlimited 00:08:33.557 Command Set Identifier: NVM (00h) 00:08:33.557 Deallocate: Supported 00:08:33.557 Deallocated/Unwritten Error: Supported 00:08:33.557 Deallocated Read Value: All 0x00 00:08:33.557 Deallocate in Write Zeroes: Not Supported 00:08:33.557 Deallocated Guard Field: 0xFFFF 00:08:33.557 Flush: Supported 00:08:33.557 Reservation: Not Supported 00:08:33.557 Namespace Sharing Capabilities: Private 00:08:33.557 Size (in LBAs): 1048576 (4GiB) 00:08:33.557 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.557 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.557 Thin Provisioning: Not Supported 00:08:33.557 Per-NS Atomic Units: No 00:08:33.557 Maximum Single Source Range Length: 128 00:08:33.557 Maximum Copy Length: 128 00:08:33.557 Maximum Source Range Count: 128 00:08:33.557 NGUID/EUI64 Never Reused: No 00:08:33.557 Namespace Write Protected: No 00:08:33.557 Number of LBA Formats: 8 00:08:33.557 Current LBA Format: LBA Format #04 00:08:33.557 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.557 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.557 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.557 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.557 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.557 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.557 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.557 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.557 00:08:33.557 NVM Specific Namespace Data 00:08:33.557 =========================== 00:08:33.557 Logical Block Storage Tag Mask: 0 00:08:33.557 Protection Information Capabilities: 00:08:33.557 16b Guard Protection Information Storage Tag Support: No 00:08:33.557 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.557 Storage Tag Check Read Support: No 00:08:33.557 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:08:33.557 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.557 Namespace ID:3 00:08:33.557 Error Recovery Timeout: Unlimited 00:08:33.557 Command Set Identifier: NVM (00h) 00:08:33.557 Deallocate: Supported 00:08:33.557 Deallocated/Unwritten Error: Supported 00:08:33.557 Deallocated Read Value: All 0x00 00:08:33.557 Deallocate in Write Zeroes: Not Supported 00:08:33.557 Deallocated Guard Field: 0xFFFF 00:08:33.558 Flush: Supported 00:08:33.558 Reservation: Not Supported 00:08:33.558 Namespace Sharing Capabilities: Private 00:08:33.558 Size (in LBAs): 1048576 (4GiB) 00:08:33.558 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.558 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.558 Thin Provisioning: Not Supported 00:08:33.558 Per-NS Atomic Units: No 00:08:33.558 Maximum Single Source Range Length: 128 00:08:33.558 Maximum Copy Length: 128 00:08:33.558 Maximum Source Range Count: 128 00:08:33.558 NGUID/EUI64 Never Reused: No 00:08:33.558 Namespace Write Protected: No 00:08:33.558 Number of LBA Formats: 8 00:08:33.558 Current LBA Format: LBA Format #04 00:08:33.558 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.558 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.558 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.558 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.558 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.558 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.558 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.558 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.558 00:08:33.558 NVM Specific Namespace Data 00:08:33.558 =========================== 00:08:33.558 Logical Block Storage Tag Mask: 0 00:08:33.558 Protection Information Capabilities: 00:08:33.558 16b Guard Protection Information Storage Tag Support: No 00:08:33.558 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.558 Storage Tag Check Read Support: No 00:08:33.558 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.558 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.820 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.820 23:03:52 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:33.820 ===================================================== 00:08:33.820 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.820 ===================================================== 00:08:33.820 Controller Capabilities/Features 00:08:33.820 ================================ 00:08:33.820 Vendor ID: 1b36 00:08:33.820 Subsystem Vendor ID: 1af4 00:08:33.820 Serial Number: 12340 00:08:33.820 Model Number: QEMU NVMe Ctrl 00:08:33.820 Firmware Version: 8.0.0 00:08:33.820 Recommended Arb Burst: 6 00:08:33.820 IEEE OUI Identifier: 00 54 52 00:08:33.820 Multi-path I/O 00:08:33.820 May have multiple subsystem ports: No 00:08:33.820 May have multiple controllers: No 00:08:33.820 Associated with SR-IOV VF: No 00:08:33.820 Max Data Transfer Size: 524288 00:08:33.820 Max Number of Namespaces: 256 00:08:33.820 Max Number of I/O Queues: 64 00:08:33.820 NVMe Specification Version (VS): 1.4 00:08:33.820 NVMe Specification Version (Identify): 1.4 00:08:33.820 Maximum Queue Entries: 2048 00:08:33.820 Contiguous Queues Required: Yes 00:08:33.820 Arbitration Mechanisms Supported 00:08:33.820 Weighted Round Robin: Not Supported 00:08:33.820 Vendor Specific: Not Supported 00:08:33.820 Reset Timeout: 7500 ms 00:08:33.820 Doorbell Stride: 4 bytes 00:08:33.820 NVM Subsystem Reset: Not Supported 00:08:33.820 Command Sets Supported 00:08:33.820 NVM Command Set: Supported 00:08:33.820 Boot Partition: Not Supported 00:08:33.820 Memory Page Size Minimum: 4096 bytes 00:08:33.820 Memory Page Size Maximum: 65536 bytes 00:08:33.820 Persistent Memory Region: Not Supported 00:08:33.820 Optional Asynchronous Events Supported 00:08:33.820 Namespace Attribute Notices: Supported 00:08:33.820 Firmware Activation Notices: Not Supported 00:08:33.820 ANA Change Notices: Not Supported 00:08:33.820 PLE Aggregate Log Change Notices: Not Supported 00:08:33.820 LBA Status Info Alert Notices: Not Supported 00:08:33.820 EGE Aggregate Log Change Notices: Not Supported 00:08:33.820 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.820 Zone Descriptor Change Notices: Not Supported 00:08:33.820 Discovery Log Change Notices: Not Supported 00:08:33.820 Controller Attributes 00:08:33.820 128-bit Host Identifier: Not Supported 00:08:33.820 Non-Operational Permissive Mode: Not Supported 00:08:33.820 NVM Sets: Not Supported 00:08:33.820 Read Recovery Levels: Not Supported 00:08:33.820 Endurance Groups: Not Supported 00:08:33.820 Predictable Latency Mode: Not Supported 00:08:33.820 Traffic Based Keep ALive: Not Supported 00:08:33.820 Namespace Granularity: Not Supported 00:08:33.820 SQ Associations: Not Supported 00:08:33.820 UUID List: Not Supported 00:08:33.820 Multi-Domain Subsystem: Not Supported 00:08:33.820 Fixed Capacity Management: Not Supported 00:08:33.820 Variable Capacity Management: Not Supported 00:08:33.820 Delete Endurance Group: Not Supported 00:08:33.820 Delete NVM Set: Not Supported 00:08:33.820 Extended LBA Formats Supported: Supported 00:08:33.820 Flexible Data Placement Supported: Not Supported 00:08:33.820 00:08:33.820 Controller Memory Buffer Support 00:08:33.820 ================================ 00:08:33.820 Supported: No 00:08:33.820 00:08:33.820 Persistent Memory Region Support 00:08:33.820 ================================ 00:08:33.820 Supported: No 00:08:33.820 00:08:33.820 Admin Command Set Attributes 00:08:33.820 ============================ 00:08:33.820 Security Send/Receive: Not Supported 00:08:33.820 
Format NVM: Supported 00:08:33.820 Firmware Activate/Download: Not Supported 00:08:33.820 Namespace Management: Supported 00:08:33.820 Device Self-Test: Not Supported 00:08:33.820 Directives: Supported 00:08:33.820 NVMe-MI: Not Supported 00:08:33.820 Virtualization Management: Not Supported 00:08:33.820 Doorbell Buffer Config: Supported 00:08:33.820 Get LBA Status Capability: Not Supported 00:08:33.820 Command & Feature Lockdown Capability: Not Supported 00:08:33.820 Abort Command Limit: 4 00:08:33.820 Async Event Request Limit: 4 00:08:33.820 Number of Firmware Slots: N/A 00:08:33.820 Firmware Slot 1 Read-Only: N/A 00:08:33.820 Firmware Activation Without Reset: N/A 00:08:33.820 Multiple Update Detection Support: N/A 00:08:33.820 Firmware Update Granularity: No Information Provided 00:08:33.820 Per-Namespace SMART Log: Yes 00:08:33.820 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.820 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:33.820 Command Effects Log Page: Supported 00:08:33.821 Get Log Page Extended Data: Supported 00:08:33.821 Telemetry Log Pages: Not Supported 00:08:33.821 Persistent Event Log Pages: Not Supported 00:08:33.821 Supported Log Pages Log Page: May Support 00:08:33.821 Commands Supported & Effects Log Page: Not Supported 00:08:33.821 Feature Identifiers & Effects Log Page:May Support 00:08:33.821 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.821 Data Area 4 for Telemetry Log: Not Supported 00:08:33.821 Error Log Page Entries Supported: 1 00:08:33.821 Keep Alive: Not Supported 00:08:33.821 00:08:33.821 NVM Command Set Attributes 00:08:33.821 ========================== 00:08:33.821 Submission Queue Entry Size 00:08:33.821 Max: 64 00:08:33.821 Min: 64 00:08:33.821 Completion Queue Entry Size 00:08:33.821 Max: 16 00:08:33.821 Min: 16 00:08:33.821 Number of Namespaces: 256 00:08:33.821 Compare Command: Supported 00:08:33.821 Write Uncorrectable Command: Not Supported 00:08:33.821 Dataset Management Command: Supported 00:08:33.821 Write Zeroes Command: Supported 00:08:33.821 Set Features Save Field: Supported 00:08:33.821 Reservations: Not Supported 00:08:33.821 Timestamp: Supported 00:08:33.821 Copy: Supported 00:08:33.821 Volatile Write Cache: Present 00:08:33.821 Atomic Write Unit (Normal): 1 00:08:33.821 Atomic Write Unit (PFail): 1 00:08:33.821 Atomic Compare & Write Unit: 1 00:08:33.821 Fused Compare & Write: Not Supported 00:08:33.821 Scatter-Gather List 00:08:33.821 SGL Command Set: Supported 00:08:33.821 SGL Keyed: Not Supported 00:08:33.821 SGL Bit Bucket Descriptor: Not Supported 00:08:33.821 SGL Metadata Pointer: Not Supported 00:08:33.821 Oversized SGL: Not Supported 00:08:33.821 SGL Metadata Address: Not Supported 00:08:33.821 SGL Offset: Not Supported 00:08:33.821 Transport SGL Data Block: Not Supported 00:08:33.821 Replay Protected Memory Block: Not Supported 00:08:33.821 00:08:33.821 Firmware Slot Information 00:08:33.821 ========================= 00:08:33.821 Active slot: 1 00:08:33.821 Slot 1 Firmware Revision: 1.0 00:08:33.821 00:08:33.821 00:08:33.821 Commands Supported and Effects 00:08:33.821 ============================== 00:08:33.821 Admin Commands 00:08:33.821 -------------- 00:08:33.821 Delete I/O Submission Queue (00h): Supported 00:08:33.821 Create I/O Submission Queue (01h): Supported 00:08:33.821 Get Log Page (02h): Supported 00:08:33.821 Delete I/O Completion Queue (04h): Supported 00:08:33.821 Create I/O Completion Queue (05h): Supported 00:08:33.821 Identify (06h): Supported 00:08:33.821 Abort (08h): Supported 
00:08:33.821 Set Features (09h): Supported 00:08:33.821 Get Features (0Ah): Supported 00:08:33.821 Asynchronous Event Request (0Ch): Supported 00:08:33.821 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.821 Directive Send (19h): Supported 00:08:33.821 Directive Receive (1Ah): Supported 00:08:33.821 Virtualization Management (1Ch): Supported 00:08:33.821 Doorbell Buffer Config (7Ch): Supported 00:08:33.821 Format NVM (80h): Supported LBA-Change 00:08:33.821 I/O Commands 00:08:33.821 ------------ 00:08:33.821 Flush (00h): Supported LBA-Change 00:08:33.821 Write (01h): Supported LBA-Change 00:08:33.821 Read (02h): Supported 00:08:33.821 Compare (05h): Supported 00:08:33.821 Write Zeroes (08h): Supported LBA-Change 00:08:33.821 Dataset Management (09h): Supported LBA-Change 00:08:33.821 Unknown (0Ch): Supported 00:08:33.821 Unknown (12h): Supported 00:08:33.821 Copy (19h): Supported LBA-Change 00:08:33.821 Unknown (1Dh): Supported LBA-Change 00:08:33.821 00:08:33.821 Error Log 00:08:33.821 ========= 00:08:33.821 00:08:33.821 Arbitration 00:08:33.821 =========== 00:08:33.821 Arbitration Burst: no limit 00:08:33.821 00:08:33.821 Power Management 00:08:33.821 ================ 00:08:33.821 Number of Power States: 1 00:08:33.821 Current Power State: Power State #0 00:08:33.821 Power State #0: 00:08:33.821 Max Power: 25.00 W 00:08:33.821 Non-Operational State: Operational 00:08:33.821 Entry Latency: 16 microseconds 00:08:33.821 Exit Latency: 4 microseconds 00:08:33.821 Relative Read Throughput: 0 00:08:33.821 Relative Read Latency: 0 00:08:33.821 Relative Write Throughput: 0 00:08:33.821 Relative Write Latency: 0 00:08:33.821 Idle Power: Not Reported 00:08:33.821 Active Power: Not Reported 00:08:33.821 Non-Operational Permissive Mode: Not Supported 00:08:33.821 00:08:33.821 Health Information 00:08:33.821 ================== 00:08:33.821 Critical Warnings: 00:08:33.821 Available Spare Space: OK 00:08:33.821 Temperature: OK 00:08:33.821 Device Reliability: OK 00:08:33.821 Read Only: No 00:08:33.821 Volatile Memory Backup: OK 00:08:33.821 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.821 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.821 Available Spare: 0% 00:08:33.821 Available Spare Threshold: 0% 00:08:33.821 Life Percentage Used: 0% 00:08:33.821 Data Units Read: 632 00:08:33.821 Data Units Written: 560 00:08:33.821 Host Read Commands: 36093 00:08:33.821 Host Write Commands: 35879 00:08:33.821 Controller Busy Time: 0 minutes 00:08:33.821 Power Cycles: 0 00:08:33.821 Power On Hours: 0 hours 00:08:33.821 Unsafe Shutdowns: 0 00:08:33.821 Unrecoverable Media Errors: 0 00:08:33.821 Lifetime Error Log Entries: 0 00:08:33.821 Warning Temperature Time: 0 minutes 00:08:33.821 Critical Temperature Time: 0 minutes 00:08:33.821 00:08:33.821 Number of Queues 00:08:33.821 ================ 00:08:33.821 Number of I/O Submission Queues: 64 00:08:33.821 Number of I/O Completion Queues: 64 00:08:33.821 00:08:33.821 ZNS Specific Controller Data 00:08:33.821 ============================ 00:08:33.821 Zone Append Size Limit: 0 00:08:33.821 00:08:33.821 00:08:33.821 Active Namespaces 00:08:33.821 ================= 00:08:33.821 Namespace ID:1 00:08:33.821 Error Recovery Timeout: Unlimited 00:08:33.821 Command Set Identifier: NVM (00h) 00:08:33.821 Deallocate: Supported 00:08:33.821 Deallocated/Unwritten Error: Supported 00:08:33.821 Deallocated Read Value: All 0x00 00:08:33.821 Deallocate in Write Zeroes: Not Supported 00:08:33.821 Deallocated Guard Field: 0xFFFF 00:08:33.821 Flush: 
Supported 00:08:33.821 Reservation: Not Supported 00:08:33.821 Metadata Transferred as: Separate Metadata Buffer 00:08:33.821 Namespace Sharing Capabilities: Private 00:08:33.821 Size (in LBAs): 1548666 (5GiB) 00:08:33.821 Capacity (in LBAs): 1548666 (5GiB) 00:08:33.821 Utilization (in LBAs): 1548666 (5GiB) 00:08:33.821 Thin Provisioning: Not Supported 00:08:33.821 Per-NS Atomic Units: No 00:08:33.821 Maximum Single Source Range Length: 128 00:08:33.821 Maximum Copy Length: 128 00:08:33.821 Maximum Source Range Count: 128 00:08:33.821 NGUID/EUI64 Never Reused: No 00:08:33.821 Namespace Write Protected: No 00:08:33.821 Number of LBA Formats: 8 00:08:33.821 Current LBA Format: LBA Format #07 00:08:33.821 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.821 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.821 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.821 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.821 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.821 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.821 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.821 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.821 00:08:33.821 NVM Specific Namespace Data 00:08:33.821 =========================== 00:08:33.821 Logical Block Storage Tag Mask: 0 00:08:33.821 Protection Information Capabilities: 00:08:33.821 16b Guard Protection Information Storage Tag Support: No 00:08:33.821 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.821 Storage Tag Check Read Support: No 00:08:33.821 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.821 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.821 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:34.084 ===================================================== 00:08:34.084 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.084 ===================================================== 00:08:34.084 Controller Capabilities/Features 00:08:34.084 ================================ 00:08:34.084 Vendor ID: 1b36 00:08:34.084 Subsystem Vendor ID: 1af4 00:08:34.084 Serial Number: 12341 00:08:34.084 Model Number: QEMU NVMe Ctrl 00:08:34.084 Firmware Version: 8.0.0 00:08:34.084 Recommended Arb Burst: 6 00:08:34.084 IEEE OUI Identifier: 00 54 52 00:08:34.084 Multi-path I/O 00:08:34.084 May have multiple subsystem ports: No 00:08:34.084 May have multiple controllers: No 00:08:34.084 Associated with SR-IOV VF: No 00:08:34.084 Max Data Transfer Size: 524288 00:08:34.084 Max Number of Namespaces: 256 00:08:34.084 Max Number of I/O Queues: 64 00:08:34.084 NVMe 
Specification Version (VS): 1.4 00:08:34.084 NVMe Specification Version (Identify): 1.4 00:08:34.084 Maximum Queue Entries: 2048 00:08:34.084 Contiguous Queues Required: Yes 00:08:34.084 Arbitration Mechanisms Supported 00:08:34.084 Weighted Round Robin: Not Supported 00:08:34.084 Vendor Specific: Not Supported 00:08:34.084 Reset Timeout: 7500 ms 00:08:34.084 Doorbell Stride: 4 bytes 00:08:34.084 NVM Subsystem Reset: Not Supported 00:08:34.084 Command Sets Supported 00:08:34.084 NVM Command Set: Supported 00:08:34.084 Boot Partition: Not Supported 00:08:34.084 Memory Page Size Minimum: 4096 bytes 00:08:34.084 Memory Page Size Maximum: 65536 bytes 00:08:34.084 Persistent Memory Region: Not Supported 00:08:34.084 Optional Asynchronous Events Supported 00:08:34.084 Namespace Attribute Notices: Supported 00:08:34.084 Firmware Activation Notices: Not Supported 00:08:34.084 ANA Change Notices: Not Supported 00:08:34.084 PLE Aggregate Log Change Notices: Not Supported 00:08:34.084 LBA Status Info Alert Notices: Not Supported 00:08:34.084 EGE Aggregate Log Change Notices: Not Supported 00:08:34.084 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.084 Zone Descriptor Change Notices: Not Supported 00:08:34.084 Discovery Log Change Notices: Not Supported 00:08:34.084 Controller Attributes 00:08:34.084 128-bit Host Identifier: Not Supported 00:08:34.084 Non-Operational Permissive Mode: Not Supported 00:08:34.084 NVM Sets: Not Supported 00:08:34.084 Read Recovery Levels: Not Supported 00:08:34.084 Endurance Groups: Not Supported 00:08:34.084 Predictable Latency Mode: Not Supported 00:08:34.084 Traffic Based Keep ALive: Not Supported 00:08:34.084 Namespace Granularity: Not Supported 00:08:34.084 SQ Associations: Not Supported 00:08:34.084 UUID List: Not Supported 00:08:34.084 Multi-Domain Subsystem: Not Supported 00:08:34.084 Fixed Capacity Management: Not Supported 00:08:34.084 Variable Capacity Management: Not Supported 00:08:34.084 Delete Endurance Group: Not Supported 00:08:34.084 Delete NVM Set: Not Supported 00:08:34.084 Extended LBA Formats Supported: Supported 00:08:34.084 Flexible Data Placement Supported: Not Supported 00:08:34.084 00:08:34.084 Controller Memory Buffer Support 00:08:34.084 ================================ 00:08:34.084 Supported: No 00:08:34.084 00:08:34.084 Persistent Memory Region Support 00:08:34.084 ================================ 00:08:34.084 Supported: No 00:08:34.084 00:08:34.084 Admin Command Set Attributes 00:08:34.084 ============================ 00:08:34.084 Security Send/Receive: Not Supported 00:08:34.084 Format NVM: Supported 00:08:34.084 Firmware Activate/Download: Not Supported 00:08:34.084 Namespace Management: Supported 00:08:34.084 Device Self-Test: Not Supported 00:08:34.084 Directives: Supported 00:08:34.084 NVMe-MI: Not Supported 00:08:34.084 Virtualization Management: Not Supported 00:08:34.084 Doorbell Buffer Config: Supported 00:08:34.084 Get LBA Status Capability: Not Supported 00:08:34.084 Command & Feature Lockdown Capability: Not Supported 00:08:34.084 Abort Command Limit: 4 00:08:34.084 Async Event Request Limit: 4 00:08:34.084 Number of Firmware Slots: N/A 00:08:34.084 Firmware Slot 1 Read-Only: N/A 00:08:34.084 Firmware Activation Without Reset: N/A 00:08:34.084 Multiple Update Detection Support: N/A 00:08:34.084 Firmware Update Granularity: No Information Provided 00:08:34.084 Per-Namespace SMART Log: Yes 00:08:34.084 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.084 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:34.084 Command Effects Log Page: Supported 00:08:34.084 Get Log Page Extended Data: Supported 00:08:34.084 Telemetry Log Pages: Not Supported 00:08:34.084 Persistent Event Log Pages: Not Supported 00:08:34.084 Supported Log Pages Log Page: May Support 00:08:34.084 Commands Supported & Effects Log Page: Not Supported 00:08:34.084 Feature Identifiers & Effects Log Page:May Support 00:08:34.084 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.084 Data Area 4 for Telemetry Log: Not Supported 00:08:34.084 Error Log Page Entries Supported: 1 00:08:34.084 Keep Alive: Not Supported 00:08:34.084 00:08:34.084 NVM Command Set Attributes 00:08:34.084 ========================== 00:08:34.084 Submission Queue Entry Size 00:08:34.084 Max: 64 00:08:34.084 Min: 64 00:08:34.084 Completion Queue Entry Size 00:08:34.084 Max: 16 00:08:34.084 Min: 16 00:08:34.084 Number of Namespaces: 256 00:08:34.084 Compare Command: Supported 00:08:34.084 Write Uncorrectable Command: Not Supported 00:08:34.084 Dataset Management Command: Supported 00:08:34.084 Write Zeroes Command: Supported 00:08:34.084 Set Features Save Field: Supported 00:08:34.084 Reservations: Not Supported 00:08:34.084 Timestamp: Supported 00:08:34.084 Copy: Supported 00:08:34.084 Volatile Write Cache: Present 00:08:34.084 Atomic Write Unit (Normal): 1 00:08:34.084 Atomic Write Unit (PFail): 1 00:08:34.084 Atomic Compare & Write Unit: 1 00:08:34.084 Fused Compare & Write: Not Supported 00:08:34.084 Scatter-Gather List 00:08:34.084 SGL Command Set: Supported 00:08:34.084 SGL Keyed: Not Supported 00:08:34.085 SGL Bit Bucket Descriptor: Not Supported 00:08:34.085 SGL Metadata Pointer: Not Supported 00:08:34.085 Oversized SGL: Not Supported 00:08:34.085 SGL Metadata Address: Not Supported 00:08:34.085 SGL Offset: Not Supported 00:08:34.085 Transport SGL Data Block: Not Supported 00:08:34.085 Replay Protected Memory Block: Not Supported 00:08:34.085 00:08:34.085 Firmware Slot Information 00:08:34.085 ========================= 00:08:34.085 Active slot: 1 00:08:34.085 Slot 1 Firmware Revision: 1.0 00:08:34.085 00:08:34.085 00:08:34.085 Commands Supported and Effects 00:08:34.085 ============================== 00:08:34.085 Admin Commands 00:08:34.085 -------------- 00:08:34.085 Delete I/O Submission Queue (00h): Supported 00:08:34.085 Create I/O Submission Queue (01h): Supported 00:08:34.085 Get Log Page (02h): Supported 00:08:34.085 Delete I/O Completion Queue (04h): Supported 00:08:34.085 Create I/O Completion Queue (05h): Supported 00:08:34.085 Identify (06h): Supported 00:08:34.085 Abort (08h): Supported 00:08:34.085 Set Features (09h): Supported 00:08:34.085 Get Features (0Ah): Supported 00:08:34.085 Asynchronous Event Request (0Ch): Supported 00:08:34.085 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.085 Directive Send (19h): Supported 00:08:34.085 Directive Receive (1Ah): Supported 00:08:34.085 Virtualization Management (1Ch): Supported 00:08:34.085 Doorbell Buffer Config (7Ch): Supported 00:08:34.085 Format NVM (80h): Supported LBA-Change 00:08:34.085 I/O Commands 00:08:34.085 ------------ 00:08:34.085 Flush (00h): Supported LBA-Change 00:08:34.085 Write (01h): Supported LBA-Change 00:08:34.085 Read (02h): Supported 00:08:34.085 Compare (05h): Supported 00:08:34.085 Write Zeroes (08h): Supported LBA-Change 00:08:34.085 Dataset Management (09h): Supported LBA-Change 00:08:34.085 Unknown (0Ch): Supported 00:08:34.085 Unknown (12h): Supported 00:08:34.085 Copy (19h): Supported LBA-Change 00:08:34.085 Unknown (1Dh): 
Supported LBA-Change 00:08:34.085 00:08:34.085 Error Log 00:08:34.085 ========= 00:08:34.085 00:08:34.085 Arbitration 00:08:34.085 =========== 00:08:34.085 Arbitration Burst: no limit 00:08:34.085 00:08:34.085 Power Management 00:08:34.085 ================ 00:08:34.085 Number of Power States: 1 00:08:34.085 Current Power State: Power State #0 00:08:34.085 Power State #0: 00:08:34.085 Max Power: 25.00 W 00:08:34.085 Non-Operational State: Operational 00:08:34.085 Entry Latency: 16 microseconds 00:08:34.085 Exit Latency: 4 microseconds 00:08:34.085 Relative Read Throughput: 0 00:08:34.085 Relative Read Latency: 0 00:08:34.085 Relative Write Throughput: 0 00:08:34.085 Relative Write Latency: 0 00:08:34.085 Idle Power: Not Reported 00:08:34.085 Active Power: Not Reported 00:08:34.085 Non-Operational Permissive Mode: Not Supported 00:08:34.085 00:08:34.085 Health Information 00:08:34.085 ================== 00:08:34.085 Critical Warnings: 00:08:34.085 Available Spare Space: OK 00:08:34.085 Temperature: OK 00:08:34.085 Device Reliability: OK 00:08:34.085 Read Only: No 00:08:34.085 Volatile Memory Backup: OK 00:08:34.085 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.085 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.085 Available Spare: 0% 00:08:34.085 Available Spare Threshold: 0% 00:08:34.085 Life Percentage Used: 0% 00:08:34.085 Data Units Read: 1006 00:08:34.085 Data Units Written: 867 00:08:34.085 Host Read Commands: 55080 00:08:34.085 Host Write Commands: 53764 00:08:34.085 Controller Busy Time: 0 minutes 00:08:34.085 Power Cycles: 0 00:08:34.085 Power On Hours: 0 hours 00:08:34.085 Unsafe Shutdowns: 0 00:08:34.085 Unrecoverable Media Errors: 0 00:08:34.085 Lifetime Error Log Entries: 0 00:08:34.085 Warning Temperature Time: 0 minutes 00:08:34.085 Critical Temperature Time: 0 minutes 00:08:34.085 00:08:34.085 Number of Queues 00:08:34.085 ================ 00:08:34.085 Number of I/O Submission Queues: 64 00:08:34.085 Number of I/O Completion Queues: 64 00:08:34.085 00:08:34.085 ZNS Specific Controller Data 00:08:34.085 ============================ 00:08:34.085 Zone Append Size Limit: 0 00:08:34.085 00:08:34.085 00:08:34.085 Active Namespaces 00:08:34.085 ================= 00:08:34.085 Namespace ID:1 00:08:34.085 Error Recovery Timeout: Unlimited 00:08:34.085 Command Set Identifier: NVM (00h) 00:08:34.085 Deallocate: Supported 00:08:34.085 Deallocated/Unwritten Error: Supported 00:08:34.085 Deallocated Read Value: All 0x00 00:08:34.085 Deallocate in Write Zeroes: Not Supported 00:08:34.085 Deallocated Guard Field: 0xFFFF 00:08:34.085 Flush: Supported 00:08:34.085 Reservation: Not Supported 00:08:34.085 Namespace Sharing Capabilities: Private 00:08:34.085 Size (in LBAs): 1310720 (5GiB) 00:08:34.085 Capacity (in LBAs): 1310720 (5GiB) 00:08:34.085 Utilization (in LBAs): 1310720 (5GiB) 00:08:34.085 Thin Provisioning: Not Supported 00:08:34.085 Per-NS Atomic Units: No 00:08:34.085 Maximum Single Source Range Length: 128 00:08:34.085 Maximum Copy Length: 128 00:08:34.085 Maximum Source Range Count: 128 00:08:34.085 NGUID/EUI64 Never Reused: No 00:08:34.085 Namespace Write Protected: No 00:08:34.085 Number of LBA Formats: 8 00:08:34.085 Current LBA Format: LBA Format #04 00:08:34.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:34.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.085 00:08:34.085 NVM Specific Namespace Data 00:08:34.085 =========================== 00:08:34.085 Logical Block Storage Tag Mask: 0 00:08:34.085 Protection Information Capabilities: 00:08:34.085 16b Guard Protection Information Storage Tag Support: No 00:08:34.085 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.085 Storage Tag Check Read Support: No 00:08:34.085 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.085 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:34.085 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:34.348 ===================================================== 00:08:34.348 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.348 ===================================================== 00:08:34.348 Controller Capabilities/Features 00:08:34.348 ================================ 00:08:34.348 Vendor ID: 1b36 00:08:34.348 Subsystem Vendor ID: 1af4 00:08:34.348 Serial Number: 12342 00:08:34.348 Model Number: QEMU NVMe Ctrl 00:08:34.348 Firmware Version: 8.0.0 00:08:34.348 Recommended Arb Burst: 6 00:08:34.348 IEEE OUI Identifier: 00 54 52 00:08:34.348 Multi-path I/O 00:08:34.348 May have multiple subsystem ports: No 00:08:34.348 May have multiple controllers: No 00:08:34.348 Associated with SR-IOV VF: No 00:08:34.348 Max Data Transfer Size: 524288 00:08:34.348 Max Number of Namespaces: 256 00:08:34.348 Max Number of I/O Queues: 64 00:08:34.348 NVMe Specification Version (VS): 1.4 00:08:34.348 NVMe Specification Version (Identify): 1.4 00:08:34.348 Maximum Queue Entries: 2048 00:08:34.348 Contiguous Queues Required: Yes 00:08:34.348 Arbitration Mechanisms Supported 00:08:34.348 Weighted Round Robin: Not Supported 00:08:34.348 Vendor Specific: Not Supported 00:08:34.348 Reset Timeout: 7500 ms 00:08:34.348 Doorbell Stride: 4 bytes 00:08:34.348 NVM Subsystem Reset: Not Supported 00:08:34.348 Command Sets Supported 00:08:34.348 NVM Command Set: Supported 00:08:34.348 Boot Partition: Not Supported 00:08:34.348 Memory Page Size Minimum: 4096 bytes 00:08:34.348 Memory Page Size Maximum: 65536 bytes 00:08:34.348 Persistent Memory Region: Not Supported 00:08:34.348 Optional Asynchronous Events Supported 00:08:34.348 Namespace Attribute Notices: Supported 00:08:34.348 Firmware Activation Notices: Not Supported 00:08:34.348 ANA Change Notices: Not Supported 00:08:34.348 PLE Aggregate Log Change Notices: Not Supported 00:08:34.348 LBA Status Info Alert Notices: 
Not Supported 00:08:34.348 EGE Aggregate Log Change Notices: Not Supported 00:08:34.348 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.348 Zone Descriptor Change Notices: Not Supported 00:08:34.348 Discovery Log Change Notices: Not Supported 00:08:34.348 Controller Attributes 00:08:34.348 128-bit Host Identifier: Not Supported 00:08:34.348 Non-Operational Permissive Mode: Not Supported 00:08:34.348 NVM Sets: Not Supported 00:08:34.348 Read Recovery Levels: Not Supported 00:08:34.348 Endurance Groups: Not Supported 00:08:34.348 Predictable Latency Mode: Not Supported 00:08:34.348 Traffic Based Keep Alive: Not Supported 00:08:34.348 Namespace Granularity: Not Supported 00:08:34.348 SQ Associations: Not Supported 00:08:34.348 UUID List: Not Supported 00:08:34.348 Multi-Domain Subsystem: Not Supported 00:08:34.348 Fixed Capacity Management: Not Supported 00:08:34.348 Variable Capacity Management: Not Supported 00:08:34.348 Delete Endurance Group: Not Supported 00:08:34.348 Delete NVM Set: Not Supported 00:08:34.348 Extended LBA Formats Supported: Supported 00:08:34.348 Flexible Data Placement Supported: Not Supported 00:08:34.348 00:08:34.348 Controller Memory Buffer Support 00:08:34.348 ================================ 00:08:34.348 Supported: No 00:08:34.348 00:08:34.348 Persistent Memory Region Support 00:08:34.348 ================================ 00:08:34.348 Supported: No 00:08:34.348 00:08:34.348 Admin Command Set Attributes 00:08:34.348 ============================ 00:08:34.348 Security Send/Receive: Not Supported 00:08:34.348 Format NVM: Supported 00:08:34.348 Firmware Activate/Download: Not Supported 00:08:34.348 Namespace Management: Supported 00:08:34.348 Device Self-Test: Not Supported 00:08:34.348 Directives: Supported 00:08:34.348 NVMe-MI: Not Supported 00:08:34.348 Virtualization Management: Not Supported 00:08:34.348 Doorbell Buffer Config: Supported 00:08:34.348 Get LBA Status Capability: Not Supported 00:08:34.348 Command & Feature Lockdown Capability: Not Supported 00:08:34.348 Abort Command Limit: 4 00:08:34.348 Async Event Request Limit: 4 00:08:34.348 Number of Firmware Slots: N/A 00:08:34.348 Firmware Slot 1 Read-Only: N/A 00:08:34.348 Firmware Activation Without Reset: N/A 00:08:34.348 Multiple Update Detection Support: N/A 00:08:34.348 Firmware Update Granularity: No Information Provided 00:08:34.348 Per-Namespace SMART Log: Yes 00:08:34.348 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.348 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:34.348 Command Effects Log Page: Supported 00:08:34.348 Get Log Page Extended Data: Supported 00:08:34.348 Telemetry Log Pages: Not Supported 00:08:34.348 Persistent Event Log Pages: Not Supported 00:08:34.348 Supported Log Pages Log Page: May Support 00:08:34.348 Commands Supported & Effects Log Page: Not Supported 00:08:34.348 Feature Identifiers & Effects Log Page: May Support 00:08:34.348 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.348 Data Area 4 for Telemetry Log: Not Supported 00:08:34.348 Error Log Page Entries Supported: 1 00:08:34.348 Keep Alive: Not Supported 00:08:34.348 00:08:34.348 NVM Command Set Attributes 00:08:34.348 ========================== 00:08:34.348 Submission Queue Entry Size 00:08:34.348 Max: 64 00:08:34.348 Min: 64 00:08:34.349 Completion Queue Entry Size 00:08:34.349 Max: 16 00:08:34.349 Min: 16 00:08:34.349 Number of Namespaces: 256 00:08:34.349 Compare Command: Supported 00:08:34.349 Write Uncorrectable Command: Not Supported 00:08:34.349 Dataset Management Command:
Supported 00:08:34.349 Write Zeroes Command: Supported 00:08:34.349 Set Features Save Field: Supported 00:08:34.349 Reservations: Not Supported 00:08:34.349 Timestamp: Supported 00:08:34.349 Copy: Supported 00:08:34.349 Volatile Write Cache: Present 00:08:34.349 Atomic Write Unit (Normal): 1 00:08:34.349 Atomic Write Unit (PFail): 1 00:08:34.349 Atomic Compare & Write Unit: 1 00:08:34.349 Fused Compare & Write: Not Supported 00:08:34.349 Scatter-Gather List 00:08:34.349 SGL Command Set: Supported 00:08:34.349 SGL Keyed: Not Supported 00:08:34.349 SGL Bit Bucket Descriptor: Not Supported 00:08:34.349 SGL Metadata Pointer: Not Supported 00:08:34.349 Oversized SGL: Not Supported 00:08:34.349 SGL Metadata Address: Not Supported 00:08:34.349 SGL Offset: Not Supported 00:08:34.349 Transport SGL Data Block: Not Supported 00:08:34.349 Replay Protected Memory Block: Not Supported 00:08:34.349 00:08:34.349 Firmware Slot Information 00:08:34.349 ========================= 00:08:34.349 Active slot: 1 00:08:34.349 Slot 1 Firmware Revision: 1.0 00:08:34.349 00:08:34.349 00:08:34.349 Commands Supported and Effects 00:08:34.349 ============================== 00:08:34.349 Admin Commands 00:08:34.349 -------------- 00:08:34.349 Delete I/O Submission Queue (00h): Supported 00:08:34.349 Create I/O Submission Queue (01h): Supported 00:08:34.349 Get Log Page (02h): Supported 00:08:34.349 Delete I/O Completion Queue (04h): Supported 00:08:34.349 Create I/O Completion Queue (05h): Supported 00:08:34.349 Identify (06h): Supported 00:08:34.349 Abort (08h): Supported 00:08:34.349 Set Features (09h): Supported 00:08:34.349 Get Features (0Ah): Supported 00:08:34.349 Asynchronous Event Request (0Ch): Supported 00:08:34.349 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.349 Directive Send (19h): Supported 00:08:34.349 Directive Receive (1Ah): Supported 00:08:34.349 Virtualization Management (1Ch): Supported 00:08:34.349 Doorbell Buffer Config (7Ch): Supported 00:08:34.349 Format NVM (80h): Supported LBA-Change 00:08:34.349 I/O Commands 00:08:34.349 ------------ 00:08:34.349 Flush (00h): Supported LBA-Change 00:08:34.349 Write (01h): Supported LBA-Change 00:08:34.349 Read (02h): Supported 00:08:34.349 Compare (05h): Supported 00:08:34.349 Write Zeroes (08h): Supported LBA-Change 00:08:34.349 Dataset Management (09h): Supported LBA-Change 00:08:34.349 Unknown (0Ch): Supported 00:08:34.349 Unknown (12h): Supported 00:08:34.349 Copy (19h): Supported LBA-Change 00:08:34.349 Unknown (1Dh): Supported LBA-Change 00:08:34.349 00:08:34.349 Error Log 00:08:34.349 ========= 00:08:34.349 00:08:34.349 Arbitration 00:08:34.349 =========== 00:08:34.349 Arbitration Burst: no limit 00:08:34.349 00:08:34.349 Power Management 00:08:34.349 ================ 00:08:34.349 Number of Power States: 1 00:08:34.349 Current Power State: Power State #0 00:08:34.349 Power State #0: 00:08:34.349 Max Power: 25.00 W 00:08:34.349 Non-Operational State: Operational 00:08:34.349 Entry Latency: 16 microseconds 00:08:34.349 Exit Latency: 4 microseconds 00:08:34.349 Relative Read Throughput: 0 00:08:34.349 Relative Read Latency: 0 00:08:34.349 Relative Write Throughput: 0 00:08:34.349 Relative Write Latency: 0 00:08:34.349 Idle Power: Not Reported 00:08:34.349 Active Power: Not Reported 00:08:34.349 Non-Operational Permissive Mode: Not Supported 00:08:34.349 00:08:34.349 Health Information 00:08:34.349 ================== 00:08:34.349 Critical Warnings: 00:08:34.349 Available Spare Space: OK 00:08:34.349 Temperature: OK 00:08:34.349 Device 
Reliability: OK 00:08:34.349 Read Only: No 00:08:34.349 Volatile Memory Backup: OK 00:08:34.349 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.349 Available Spare: 0% 00:08:34.349 Available Spare Threshold: 0% 00:08:34.349 Life Percentage Used: 0% 00:08:34.349 Data Units Read: 2204 00:08:34.349 Data Units Written: 1991 00:08:34.349 Host Read Commands: 111837 00:08:34.349 Host Write Commands: 110106 00:08:34.349 Controller Busy Time: 0 minutes 00:08:34.349 Power Cycles: 0 00:08:34.349 Power On Hours: 0 hours 00:08:34.349 Unsafe Shutdowns: 0 00:08:34.349 Unrecoverable Media Errors: 0 00:08:34.349 Lifetime Error Log Entries: 0 00:08:34.349 Warning Temperature Time: 0 minutes 00:08:34.349 Critical Temperature Time: 0 minutes 00:08:34.349 00:08:34.349 Number of Queues 00:08:34.349 ================ 00:08:34.349 Number of I/O Submission Queues: 64 00:08:34.349 Number of I/O Completion Queues: 64 00:08:34.349 00:08:34.349 ZNS Specific Controller Data 00:08:34.349 ============================ 00:08:34.349 Zone Append Size Limit: 0 00:08:34.349 00:08:34.349 00:08:34.349 Active Namespaces 00:08:34.349 ================= 00:08:34.349 Namespace ID:1 00:08:34.349 Error Recovery Timeout: Unlimited 00:08:34.349 Command Set Identifier: NVM (00h) 00:08:34.349 Deallocate: Supported 00:08:34.349 Deallocated/Unwritten Error: Supported 00:08:34.349 Deallocated Read Value: All 0x00 00:08:34.349 Deallocate in Write Zeroes: Not Supported 00:08:34.349 Deallocated Guard Field: 0xFFFF 00:08:34.349 Flush: Supported 00:08:34.349 Reservation: Not Supported 00:08:34.349 Namespace Sharing Capabilities: Private 00:08:34.349 Size (in LBAs): 1048576 (4GiB) 00:08:34.349 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.349 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.349 Thin Provisioning: Not Supported 00:08:34.349 Per-NS Atomic Units: No 00:08:34.349 Maximum Single Source Range Length: 128 00:08:34.349 Maximum Copy Length: 128 00:08:34.349 Maximum Source Range Count: 128 00:08:34.349 NGUID/EUI64 Never Reused: No 00:08:34.349 Namespace Write Protected: No 00:08:34.349 Number of LBA Formats: 8 00:08:34.349 Current LBA Format: LBA Format #04 00:08:34.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.349 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.349 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.349 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.349 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.349 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.349 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.349 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.349 00:08:34.349 NVM Specific Namespace Data 00:08:34.349 =========================== 00:08:34.349 Logical Block Storage Tag Mask: 0 00:08:34.349 Protection Information Capabilities: 00:08:34.349 16b Guard Protection Information Storage Tag Support: No 00:08:34.349 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.349 Storage Tag Check Read Support: No 00:08:34.349 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.349 Namespace ID:2 00:08:34.349 Error Recovery Timeout: Unlimited 00:08:34.349 Command Set Identifier: NVM (00h) 00:08:34.349 Deallocate: Supported 00:08:34.349 Deallocated/Unwritten Error: Supported 00:08:34.349 Deallocated Read Value: All 0x00 00:08:34.349 Deallocate in Write Zeroes: Not Supported 00:08:34.349 Deallocated Guard Field: 0xFFFF 00:08:34.349 Flush: Supported 00:08:34.349 Reservation: Not Supported 00:08:34.349 Namespace Sharing Capabilities: Private 00:08:34.349 Size (in LBAs): 1048576 (4GiB) 00:08:34.349 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.349 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.349 Thin Provisioning: Not Supported 00:08:34.349 Per-NS Atomic Units: No 00:08:34.349 Maximum Single Source Range Length: 128 00:08:34.349 Maximum Copy Length: 128 00:08:34.349 Maximum Source Range Count: 128 00:08:34.349 NGUID/EUI64 Never Reused: No 00:08:34.349 Namespace Write Protected: No 00:08:34.349 Number of LBA Formats: 8 00:08:34.349 Current LBA Format: LBA Format #04 00:08:34.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.350 00:08:34.350 NVM Specific Namespace Data 00:08:34.350 =========================== 00:08:34.350 Logical Block Storage Tag Mask: 0 00:08:34.350 Protection Information Capabilities: 00:08:34.350 16b Guard Protection Information Storage Tag Support: No 00:08:34.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.350 Storage Tag Check Read Support: No 00:08:34.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Namespace ID:3 00:08:34.350 Error Recovery Timeout: Unlimited 00:08:34.350 Command Set Identifier: NVM (00h) 00:08:34.350 Deallocate: Supported 00:08:34.350 Deallocated/Unwritten Error: Supported 00:08:34.350 Deallocated Read Value: All 0x00 00:08:34.350 Deallocate in Write Zeroes: Not Supported 00:08:34.350 Deallocated Guard Field: 0xFFFF 00:08:34.350 Flush: Supported 00:08:34.350 Reservation: Not Supported 00:08:34.350 
Namespace Sharing Capabilities: Private 00:08:34.350 Size (in LBAs): 1048576 (4GiB) 00:08:34.350 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.350 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.350 Thin Provisioning: Not Supported 00:08:34.350 Per-NS Atomic Units: No 00:08:34.350 Maximum Single Source Range Length: 128 00:08:34.350 Maximum Copy Length: 128 00:08:34.350 Maximum Source Range Count: 128 00:08:34.350 NGUID/EUI64 Never Reused: No 00:08:34.350 Namespace Write Protected: No 00:08:34.350 Number of LBA Formats: 8 00:08:34.350 Current LBA Format: LBA Format #04 00:08:34.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.350 00:08:34.350 NVM Specific Namespace Data 00:08:34.350 =========================== 00:08:34.350 Logical Block Storage Tag Mask: 0 00:08:34.350 Protection Information Capabilities: 00:08:34.350 16b Guard Protection Information Storage Tag Support: No 00:08:34.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.350 Storage Tag Check Read Support: No 00:08:34.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.350 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:34.350 23:03:53 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:34.613 ===================================================== 00:08:34.613 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.613 ===================================================== 00:08:34.613 Controller Capabilities/Features 00:08:34.613 ================================ 00:08:34.613 Vendor ID: 1b36 00:08:34.613 Subsystem Vendor ID: 1af4 00:08:34.613 Serial Number: 12343 00:08:34.613 Model Number: QEMU NVMe Ctrl 00:08:34.613 Firmware Version: 8.0.0 00:08:34.613 Recommended Arb Burst: 6 00:08:34.613 IEEE OUI Identifier: 00 54 52 00:08:34.613 Multi-path I/O 00:08:34.613 May have multiple subsystem ports: No 00:08:34.613 May have multiple controllers: Yes 00:08:34.613 Associated with SR-IOV VF: No 00:08:34.613 Max Data Transfer Size: 524288 00:08:34.613 Max Number of Namespaces: 256 00:08:34.613 Max Number of I/O Queues: 64 00:08:34.613 NVMe Specification Version (VS): 1.4 00:08:34.613 NVMe Specification Version (Identify): 1.4 00:08:34.613 Maximum Queue Entries: 2048 
00:08:34.613 Contiguous Queues Required: Yes 00:08:34.613 Arbitration Mechanisms Supported 00:08:34.613 Weighted Round Robin: Not Supported 00:08:34.613 Vendor Specific: Not Supported 00:08:34.613 Reset Timeout: 7500 ms 00:08:34.613 Doorbell Stride: 4 bytes 00:08:34.613 NVM Subsystem Reset: Not Supported 00:08:34.613 Command Sets Supported 00:08:34.613 NVM Command Set: Supported 00:08:34.613 Boot Partition: Not Supported 00:08:34.613 Memory Page Size Minimum: 4096 bytes 00:08:34.613 Memory Page Size Maximum: 65536 bytes 00:08:34.613 Persistent Memory Region: Not Supported 00:08:34.613 Optional Asynchronous Events Supported 00:08:34.613 Namespace Attribute Notices: Supported 00:08:34.613 Firmware Activation Notices: Not Supported 00:08:34.613 ANA Change Notices: Not Supported 00:08:34.613 PLE Aggregate Log Change Notices: Not Supported 00:08:34.613 LBA Status Info Alert Notices: Not Supported 00:08:34.613 EGE Aggregate Log Change Notices: Not Supported 00:08:34.613 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.613 Zone Descriptor Change Notices: Not Supported 00:08:34.613 Discovery Log Change Notices: Not Supported 00:08:34.613 Controller Attributes 00:08:34.613 128-bit Host Identifier: Not Supported 00:08:34.613 Non-Operational Permissive Mode: Not Supported 00:08:34.613 NVM Sets: Not Supported 00:08:34.613 Read Recovery Levels: Not Supported 00:08:34.613 Endurance Groups: Supported 00:08:34.613 Predictable Latency Mode: Not Supported 00:08:34.613 Traffic Based Keep Alive: Not Supported 00:08:34.613 Namespace Granularity: Not Supported 00:08:34.613 SQ Associations: Not Supported 00:08:34.613 UUID List: Not Supported 00:08:34.613 Multi-Domain Subsystem: Not Supported 00:08:34.613 Fixed Capacity Management: Not Supported 00:08:34.613 Variable Capacity Management: Not Supported 00:08:34.613 Delete Endurance Group: Not Supported 00:08:34.613 Delete NVM Set: Not Supported 00:08:34.613 Extended LBA Formats Supported: Supported 00:08:34.613 Flexible Data Placement Supported: Supported 00:08:34.613 00:08:34.613 Controller Memory Buffer Support 00:08:34.613 ================================ 00:08:34.613 Supported: No 00:08:34.613 00:08:34.613 Persistent Memory Region Support 00:08:34.613 ================================ 00:08:34.613 Supported: No 00:08:34.613 00:08:34.613 Admin Command Set Attributes 00:08:34.613 ============================ 00:08:34.613 Security Send/Receive: Not Supported 00:08:34.613 Format NVM: Supported 00:08:34.613 Firmware Activate/Download: Not Supported 00:08:34.613 Namespace Management: Supported 00:08:34.613 Device Self-Test: Not Supported 00:08:34.613 Directives: Supported 00:08:34.613 NVMe-MI: Not Supported 00:08:34.613 Virtualization Management: Not Supported 00:08:34.613 Doorbell Buffer Config: Supported 00:08:34.613 Get LBA Status Capability: Not Supported 00:08:34.613 Command & Feature Lockdown Capability: Not Supported 00:08:34.613 Abort Command Limit: 4 00:08:34.613 Async Event Request Limit: 4 00:08:34.613 Number of Firmware Slots: N/A 00:08:34.613 Firmware Slot 1 Read-Only: N/A 00:08:34.613 Firmware Activation Without Reset: N/A 00:08:34.614 Multiple Update Detection Support: N/A 00:08:34.614 Firmware Update Granularity: No Information Provided 00:08:34.614 Per-Namespace SMART Log: Yes 00:08:34.614 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.614 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:34.614 Command Effects Log Page: Supported 00:08:34.614 Get Log Page Extended Data: Supported 00:08:34.614 Telemetry Log Pages: Not
Supported 00:08:34.614 Persistent Event Log Pages: Not Supported 00:08:34.614 Supported Log Pages Log Page: May Support 00:08:34.614 Commands Supported & Effects Log Page: Not Supported 00:08:34.614 Feature Identifiers & Effects Log Page: May Support 00:08:34.614 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.614 Data Area 4 for Telemetry Log: Not Supported 00:08:34.614 Error Log Page Entries Supported: 1 00:08:34.614 Keep Alive: Not Supported 00:08:34.614 00:08:34.614 NVM Command Set Attributes 00:08:34.614 ========================== 00:08:34.614 Submission Queue Entry Size 00:08:34.614 Max: 64 00:08:34.614 Min: 64 00:08:34.614 Completion Queue Entry Size 00:08:34.614 Max: 16 00:08:34.614 Min: 16 00:08:34.614 Number of Namespaces: 256 00:08:34.614 Compare Command: Supported 00:08:34.614 Write Uncorrectable Command: Not Supported 00:08:34.614 Dataset Management Command: Supported 00:08:34.614 Write Zeroes Command: Supported 00:08:34.614 Set Features Save Field: Supported 00:08:34.614 Reservations: Not Supported 00:08:34.614 Timestamp: Supported 00:08:34.614 Copy: Supported 00:08:34.614 Volatile Write Cache: Present 00:08:34.614 Atomic Write Unit (Normal): 1 00:08:34.614 Atomic Write Unit (PFail): 1 00:08:34.614 Atomic Compare & Write Unit: 1 00:08:34.614 Fused Compare & Write: Not Supported 00:08:34.614 Scatter-Gather List 00:08:34.614 SGL Command Set: Supported 00:08:34.614 SGL Keyed: Not Supported 00:08:34.614 SGL Bit Bucket Descriptor: Not Supported 00:08:34.614 SGL Metadata Pointer: Not Supported 00:08:34.614 Oversized SGL: Not Supported 00:08:34.614 SGL Metadata Address: Not Supported 00:08:34.614 SGL Offset: Not Supported 00:08:34.614 Transport SGL Data Block: Not Supported 00:08:34.614 Replay Protected Memory Block: Not Supported 00:08:34.614 00:08:34.614 Firmware Slot Information 00:08:34.614 ========================= 00:08:34.614 Active slot: 1 00:08:34.614 Slot 1 Firmware Revision: 1.0 00:08:34.614 00:08:34.614 00:08:34.614 Commands Supported and Effects 00:08:34.614 ============================== 00:08:34.614 Admin Commands 00:08:34.614 -------------- 00:08:34.614 Delete I/O Submission Queue (00h): Supported 00:08:34.614 Create I/O Submission Queue (01h): Supported 00:08:34.614 Get Log Page (02h): Supported 00:08:34.614 Delete I/O Completion Queue (04h): Supported 00:08:34.614 Create I/O Completion Queue (05h): Supported 00:08:34.614 Identify (06h): Supported 00:08:34.614 Abort (08h): Supported 00:08:34.614 Set Features (09h): Supported 00:08:34.614 Get Features (0Ah): Supported 00:08:34.614 Asynchronous Event Request (0Ch): Supported 00:08:34.614 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.614 Directive Send (19h): Supported 00:08:34.614 Directive Receive (1Ah): Supported 00:08:34.614 Virtualization Management (1Ch): Supported 00:08:34.614 Doorbell Buffer Config (7Ch): Supported 00:08:34.614 Format NVM (80h): Supported LBA-Change 00:08:34.614 I/O Commands 00:08:34.614 ------------ 00:08:34.614 Flush (00h): Supported LBA-Change 00:08:34.614 Write (01h): Supported LBA-Change 00:08:34.614 Read (02h): Supported 00:08:34.614 Compare (05h): Supported 00:08:34.614 Write Zeroes (08h): Supported LBA-Change 00:08:34.614 Dataset Management (09h): Supported LBA-Change 00:08:34.614 Unknown (0Ch): Supported 00:08:34.614 Unknown (12h): Supported 00:08:34.614 Copy (19h): Supported LBA-Change 00:08:34.614 Unknown (1Dh): Supported LBA-Change 00:08:34.614 00:08:34.614 Error Log 00:08:34.614 ========= 00:08:34.614 00:08:34.614 Arbitration 00:08:34.614 ===========
00:08:34.614 Arbitration Burst: no limit 00:08:34.614 00:08:34.614 Power Management 00:08:34.614 ================ 00:08:34.614 Number of Power States: 1 00:08:34.614 Current Power State: Power State #0 00:08:34.614 Power State #0: 00:08:34.614 Max Power: 25.00 W 00:08:34.614 Non-Operational State: Operational 00:08:34.614 Entry Latency: 16 microseconds 00:08:34.614 Exit Latency: 4 microseconds 00:08:34.614 Relative Read Throughput: 0 00:08:34.614 Relative Read Latency: 0 00:08:34.614 Relative Write Throughput: 0 00:08:34.614 Relative Write Latency: 0 00:08:34.614 Idle Power: Not Reported 00:08:34.614 Active Power: Not Reported 00:08:34.614 Non-Operational Permissive Mode: Not Supported 00:08:34.614 00:08:34.614 Health Information 00:08:34.614 ================== 00:08:34.614 Critical Warnings: 00:08:34.614 Available Spare Space: OK 00:08:34.614 Temperature: OK 00:08:34.614 Device Reliability: OK 00:08:34.614 Read Only: No 00:08:34.614 Volatile Memory Backup: OK 00:08:34.614 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.614 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.614 Available Spare: 0% 00:08:34.614 Available Spare Threshold: 0% 00:08:34.614 Life Percentage Used: 0% 00:08:34.614 Data Units Read: 941 00:08:34.614 Data Units Written: 870 00:08:34.614 Host Read Commands: 38931 00:08:34.614 Host Write Commands: 38354 00:08:34.614 Controller Busy Time: 0 minutes 00:08:34.614 Power Cycles: 0 00:08:34.614 Power On Hours: 0 hours 00:08:34.614 Unsafe Shutdowns: 0 00:08:34.614 Unrecoverable Media Errors: 0 00:08:34.614 Lifetime Error Log Entries: 0 00:08:34.614 Warning Temperature Time: 0 minutes 00:08:34.614 Critical Temperature Time: 0 minutes 00:08:34.614 00:08:34.614 Number of Queues 00:08:34.614 ================ 00:08:34.614 Number of I/O Submission Queues: 64 00:08:34.614 Number of I/O Completion Queues: 64 00:08:34.614 00:08:34.614 ZNS Specific Controller Data 00:08:34.614 ============================ 00:08:34.614 Zone Append Size Limit: 0 00:08:34.614 00:08:34.614 00:08:34.614 Active Namespaces 00:08:34.614 ================= 00:08:34.614 Namespace ID:1 00:08:34.614 Error Recovery Timeout: Unlimited 00:08:34.614 Command Set Identifier: NVM (00h) 00:08:34.614 Deallocate: Supported 00:08:34.614 Deallocated/Unwritten Error: Supported 00:08:34.614 Deallocated Read Value: All 0x00 00:08:34.614 Deallocate in Write Zeroes: Not Supported 00:08:34.614 Deallocated Guard Field: 0xFFFF 00:08:34.614 Flush: Supported 00:08:34.614 Reservation: Not Supported 00:08:34.614 Namespace Sharing Capabilities: Multiple Controllers 00:08:34.614 Size (in LBAs): 262144 (1GiB) 00:08:34.614 Capacity (in LBAs): 262144 (1GiB) 00:08:34.614 Utilization (in LBAs): 262144 (1GiB) 00:08:34.614 Thin Provisioning: Not Supported 00:08:34.614 Per-NS Atomic Units: No 00:08:34.614 Maximum Single Source Range Length: 128 00:08:34.615 Maximum Copy Length: 128 00:08:34.615 Maximum Source Range Count: 128 00:08:34.615 NGUID/EUI64 Never Reused: No 00:08:34.615 Namespace Write Protected: No 00:08:34.615 Endurance group ID: 1 00:08:34.615 Number of LBA Formats: 8 00:08:34.615 Current LBA Format: LBA Format #04 00:08:34.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.615 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.615 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.615 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.615 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.615 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.615 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:34.615 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.615 00:08:34.615 Get Feature FDP: 00:08:34.615 ================ 00:08:34.615 Enabled: Yes 00:08:34.615 FDP configuration index: 0 00:08:34.615 00:08:34.615 FDP configurations log page 00:08:34.615 =========================== 00:08:34.615 Number of FDP configurations: 1 00:08:34.615 Version: 0 00:08:34.615 Size: 112 00:08:34.615 FDP Configuration Descriptor: 0 00:08:34.615 Descriptor Size: 96 00:08:34.615 Reclaim Group Identifier format: 2 00:08:34.615 FDP Volatile Write Cache: Not Present 00:08:34.615 FDP Configuration: Valid 00:08:34.615 Vendor Specific Size: 0 00:08:34.615 Number of Reclaim Groups: 2 00:08:34.615 Number of Reclaim Unit Handles: 8 00:08:34.615 Max Placement Identifiers: 128 00:08:34.615 Number of Namespaces Supported: 256 00:08:34.615 Reclaim Unit Nominal Size: 6000000 bytes 00:08:34.615 Estimated Reclaim Unit Time Limit: Not Reported 00:08:34.615 RUH Desc #000: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #001: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #002: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #003: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #004: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #005: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #006: RUH Type: Initially Isolated 00:08:34.615 RUH Desc #007: RUH Type: Initially Isolated 00:08:34.615 00:08:34.615 FDP reclaim unit handle usage log page 00:08:34.615 ====================================== 00:08:34.615 Number of Reclaim Unit Handles: 8 00:08:34.615 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:34.615 RUH Usage Desc #001: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #002: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #003: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #004: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #005: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #006: RUH Attributes: Unused 00:08:34.615 RUH Usage Desc #007: RUH Attributes: Unused 00:08:34.615 00:08:34.615 FDP statistics log page 00:08:34.615 ======================= 00:08:34.615 Host bytes with metadata written: 547856384 00:08:34.615 Media bytes with metadata written: 547934208 00:08:34.615 Media bytes erased: 0 00:08:34.615 00:08:34.615 FDP events log page 00:08:34.615 =================== 00:08:34.615 Number of FDP events: 0 00:08:34.615 00:08:34.615 NVM Specific Namespace Data 00:08:34.615 =========================== 00:08:34.615 Logical Block Storage Tag Mask: 0 00:08:34.615 Protection Information Capabilities: 00:08:34.615 16b Guard Protection Information Storage Tag Support: No 00:08:34.615 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.615 Storage Tag Check Read Support: No 00:08:34.615 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.615 00:08:34.615 real 0m1.107s 00:08:34.615 user 0m0.362s 00:08:34.615 sys 0m0.528s 00:08:34.615 23:03:53 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.615 23:03:53 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:34.615 ************************************ 00:08:34.615 END TEST nvme_identify 00:08:34.615 ************************************ 00:08:34.615 23:03:53 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:34.615 23:03:53 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.615 23:03:53 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.615 23:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.615 ************************************ 00:08:34.615 START TEST nvme_perf 00:08:34.615 ************************************ 00:08:34.615 23:03:53 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:34.615 23:03:53 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:36.004 Initializing NVMe Controllers 00:08:36.004 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.004 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.004 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.004 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.004 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.004 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.004 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.004 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.004 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.004 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.004 Initialization complete. Launching workers. 
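The summary table below lists IOPS and MiB/s side by side for each attached namespace, and the two columns are consistent with the 12288-byte reads requested via '-o 12288'. A minimal Python sketch of that relation, assuming the MiB/s column is derived as IOPS * I/O size / 2^20 (the variable names and the derivation are illustrative, not part of the captured run):

    io_size = 12288                          # bytes per read, from '-o 12288'
    iops = 8661.32                           # PCIE (0000:00:13.0) NSID 1 row below
    print(round(iops * io_size / 2**20, 2))  # 101.5, matching the MiB/s column

The Total row checks out the same way: 52031.59 * 12288 / 2**20 is approximately 609.75 MiB/s.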
00:08:36.004 ======================================================== 00:08:36.004 Latency(us) 00:08:36.004 Device Information : IOPS MiB/s Average min max 00:08:36.004 PCIE (0000:00:13.0) NSID 1 from core 0: 8661.32 101.50 14799.05 8887.83 35306.18 00:08:36.004 PCIE (0000:00:10.0) NSID 1 from core 0: 8661.32 101.50 14788.68 8390.69 35131.08 00:08:36.004 PCIE (0000:00:11.0) NSID 1 from core 0: 8661.32 101.50 14776.82 8456.06 34729.41 00:08:36.004 PCIE (0000:00:12.0) NSID 1 from core 0: 8661.32 101.50 14762.93 7913.97 35398.40 00:08:36.004 PCIE (0000:00:12.0) NSID 2 from core 0: 8661.32 101.50 14749.36 7367.72 35186.75 00:08:36.004 PCIE (0000:00:12.0) NSID 3 from core 0: 8725.00 102.25 14628.30 6946.24 30301.13 00:08:36.004 ======================================================== 00:08:36.004 Total : 52031.59 609.75 14750.71 6946.24 35398.40 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9779.988us 00:08:36.004 10.00000% : 11494.006us 00:08:36.004 25.00000% : 12855.138us 00:08:36.004 50.00000% : 14619.569us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17341.834us 00:08:36.004 95.00000% : 18652.554us 00:08:36.004 98.00000% : 26214.400us 00:08:36.004 99.00000% : 30247.385us 00:08:36.004 99.50000% : 34482.018us 00:08:36.004 99.90000% : 35288.615us 00:08:36.004 99.99000% : 35490.265us 00:08:36.004 99.99900% : 35490.265us 00:08:36.004 99.99990% : 35490.265us 00:08:36.004 99.99999% : 35490.265us 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9527.926us 00:08:36.004 10.00000% : 11494.006us 00:08:36.004 25.00000% : 12855.138us 00:08:36.004 50.00000% : 14619.569us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17241.009us 00:08:36.004 95.00000% : 19055.852us 00:08:36.004 98.00000% : 26214.400us 00:08:36.004 99.00000% : 28230.892us 00:08:36.004 99.50000% : 34280.369us 00:08:36.004 99.90000% : 35086.966us 00:08:36.004 99.99000% : 35288.615us 00:08:36.004 99.99900% : 35288.615us 00:08:36.004 99.99990% : 35288.615us 00:08:36.004 99.99999% : 35288.615us 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9376.689us 00:08:36.004 10.00000% : 11494.006us 00:08:36.004 25.00000% : 12855.138us 00:08:36.004 50.00000% : 14720.394us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17241.009us 00:08:36.004 95.00000% : 18551.729us 00:08:36.004 98.00000% : 27020.997us 00:08:36.004 99.00000% : 28835.840us 00:08:36.004 99.50000% : 34078.720us 00:08:36.004 99.90000% : 34683.668us 00:08:36.004 99.99000% : 34885.317us 00:08:36.004 99.99900% : 34885.317us 00:08:36.004 99.99990% : 34885.317us 00:08:36.004 99.99999% : 34885.317us 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9427.102us 00:08:36.004 10.00000% : 11241.945us 00:08:36.004 25.00000% : 12905.551us 00:08:36.004 50.00000% : 14518.745us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17442.658us 00:08:36.004 95.00000% : 18551.729us 00:08:36.004 98.00000% : 25811.102us 
00:08:36.004 99.00000% : 29239.138us 00:08:36.004 99.50000% : 34482.018us 00:08:36.004 99.90000% : 35288.615us 00:08:36.004 99.99000% : 35490.265us 00:08:36.004 99.99900% : 35490.265us 00:08:36.004 99.99990% : 35490.265us 00:08:36.004 99.99999% : 35490.265us 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9376.689us 00:08:36.004 10.00000% : 11292.357us 00:08:36.004 25.00000% : 12855.138us 00:08:36.004 50.00000% : 14518.745us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17644.308us 00:08:36.004 95.00000% : 18652.554us 00:08:36.004 98.00000% : 25710.277us 00:08:36.004 99.00000% : 29239.138us 00:08:36.004 99.50000% : 34482.018us 00:08:36.004 99.90000% : 35086.966us 00:08:36.004 99.99000% : 35288.615us 00:08:36.004 99.99900% : 35288.615us 00:08:36.004 99.99990% : 35288.615us 00:08:36.004 99.99999% : 35288.615us 00:08:36.004 00:08:36.004 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:36.004 ================================================================================= 00:08:36.004 1.00000% : 9376.689us 00:08:36.004 10.00000% : 11342.769us 00:08:36.004 25.00000% : 12855.138us 00:08:36.004 50.00000% : 14518.745us 00:08:36.004 75.00000% : 16131.938us 00:08:36.004 90.00000% : 17442.658us 00:08:36.004 95.00000% : 18350.080us 00:08:36.004 98.00000% : 24298.732us 00:08:36.004 99.00000% : 26617.698us 00:08:36.004 99.50000% : 29440.788us 00:08:36.004 99.90000% : 30247.385us 00:08:36.004 99.99000% : 30449.034us 00:08:36.004 99.99900% : 30449.034us 00:08:36.004 99.99990% : 30449.034us 00:08:36.004 99.99999% : 30449.034us 00:08:36.004 00:08:36.004 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:36.004 ============================================================================== 00:08:36.004 Range in us Cumulative IO count 00:08:36.004 8872.566 - 8922.978: 0.0689% ( 6) 00:08:36.004 8922.978 - 8973.391: 0.1034% ( 3) 00:08:36.004 8973.391 - 9023.803: 0.1264% ( 2) 00:08:36.004 9023.803 - 9074.215: 0.1608% ( 3) 00:08:36.004 9074.215 - 9124.628: 0.2183% ( 5) 00:08:36.004 9124.628 - 9175.040: 0.2642% ( 4) 00:08:36.004 9175.040 - 9225.452: 0.3332% ( 6) 00:08:36.004 9225.452 - 9275.865: 0.3906% ( 5) 00:08:36.004 9275.865 - 9326.277: 0.4596% ( 6) 00:08:36.004 9326.277 - 9376.689: 0.5170% ( 5) 00:08:36.004 9376.689 - 9427.102: 0.5859% ( 6) 00:08:36.004 9427.102 - 9477.514: 0.6549% ( 6) 00:08:36.004 9477.514 - 9527.926: 0.7238% ( 6) 00:08:36.004 9527.926 - 9578.338: 0.7927% ( 6) 00:08:36.004 9578.338 - 9628.751: 0.8502% ( 5) 00:08:36.004 9628.751 - 9679.163: 0.9191% ( 6) 00:08:36.004 9679.163 - 9729.575: 0.9995% ( 7) 00:08:36.004 9729.575 - 9779.988: 1.0685% ( 6) 00:08:36.004 9779.988 - 9830.400: 1.1719% ( 9) 00:08:36.004 9830.400 - 9880.812: 1.3212% ( 13) 00:08:36.004 9880.812 - 9931.225: 1.4476% ( 11) 00:08:36.004 9931.225 - 9981.637: 1.5625% ( 10) 00:08:36.004 9981.637 - 10032.049: 1.6659% ( 9) 00:08:36.004 10032.049 - 10082.462: 1.8038% ( 12) 00:08:36.004 10082.462 - 10132.874: 1.9991% ( 17) 00:08:36.004 10132.874 - 10183.286: 2.1484% ( 13) 00:08:36.004 10183.286 - 10233.698: 2.3438% ( 17) 00:08:36.004 10233.698 - 10284.111: 2.5161% ( 15) 00:08:36.004 10284.111 - 10334.523: 2.7459% ( 20) 00:08:36.004 10334.523 - 10384.935: 2.9986% ( 22) 00:08:36.004 10384.935 - 10435.348: 3.2399% ( 21) 00:08:36.004 10435.348 - 10485.760: 3.4697% ( 20) 00:08:36.004 10485.760 - 10536.172: 3.7914% ( 28) 
00:08:36.004 10536.172 - 10586.585: 4.1475% ( 31) 00:08:36.004 10586.585 - 10636.997: 4.4233% ( 24) 00:08:36.004 10636.997 - 10687.409: 4.6415% ( 19) 00:08:36.004 10687.409 - 10737.822: 4.9288% ( 25) 00:08:36.004 10737.822 - 10788.234: 5.2390% ( 27) 00:08:36.004 10788.234 - 10838.646: 5.5607% ( 28) 00:08:36.004 10838.646 - 10889.058: 5.9398% ( 33) 00:08:36.004 10889.058 - 10939.471: 6.2845% ( 30) 00:08:36.004 10939.471 - 10989.883: 6.5832% ( 26) 00:08:36.004 10989.883 - 11040.295: 6.9278% ( 30) 00:08:36.004 11040.295 - 11090.708: 7.2955% ( 32) 00:08:36.004 11090.708 - 11141.120: 7.6746% ( 33) 00:08:36.004 11141.120 - 11191.532: 8.0653% ( 34) 00:08:36.004 11191.532 - 11241.945: 8.4674% ( 35) 00:08:36.004 11241.945 - 11292.357: 8.8006% ( 29) 00:08:36.004 11292.357 - 11342.769: 9.1452% ( 30) 00:08:36.004 11342.769 - 11393.182: 9.4784% ( 29) 00:08:36.004 11393.182 - 11443.594: 9.8575% ( 33) 00:08:36.004 11443.594 - 11494.006: 10.1792% ( 28) 00:08:36.004 11494.006 - 11544.418: 10.5009% ( 28) 00:08:36.004 11544.418 - 11594.831: 10.8111% ( 27) 00:08:36.005 11594.831 - 11645.243: 11.0754% ( 23) 00:08:36.005 11645.243 - 11695.655: 11.3856% ( 27) 00:08:36.005 11695.655 - 11746.068: 11.8451% ( 40) 00:08:36.005 11746.068 - 11796.480: 12.3392% ( 43) 00:08:36.005 11796.480 - 11846.892: 12.9825% ( 56) 00:08:36.005 11846.892 - 11897.305: 13.6029% ( 54) 00:08:36.005 11897.305 - 11947.717: 14.0970% ( 43) 00:08:36.005 11947.717 - 11998.129: 14.7174% ( 54) 00:08:36.005 11998.129 - 12048.542: 15.3608% ( 56) 00:08:36.005 12048.542 - 12098.954: 15.9467% ( 51) 00:08:36.005 12098.954 - 12149.366: 16.5901% ( 56) 00:08:36.005 12149.366 - 12199.778: 17.2220% ( 55) 00:08:36.005 12199.778 - 12250.191: 17.8079% ( 51) 00:08:36.005 12250.191 - 12300.603: 18.3824% ( 50) 00:08:36.005 12300.603 - 12351.015: 19.0028% ( 54) 00:08:36.005 12351.015 - 12401.428: 19.6576% ( 57) 00:08:36.005 12401.428 - 12451.840: 20.3699% ( 62) 00:08:36.005 12451.840 - 12502.252: 21.0133% ( 56) 00:08:36.005 12502.252 - 12552.665: 21.6567% ( 56) 00:08:36.005 12552.665 - 12603.077: 22.3346% ( 59) 00:08:36.005 12603.077 - 12653.489: 23.0124% ( 59) 00:08:36.005 12653.489 - 12703.902: 23.6788% ( 58) 00:08:36.005 12703.902 - 12754.314: 24.2877% ( 53) 00:08:36.005 12754.314 - 12804.726: 24.9311% ( 56) 00:08:36.005 12804.726 - 12855.138: 25.5515% ( 54) 00:08:36.005 12855.138 - 12905.551: 26.1144% ( 49) 00:08:36.005 12905.551 - 13006.375: 27.4242% ( 114) 00:08:36.005 13006.375 - 13107.200: 28.8143% ( 121) 00:08:36.005 13107.200 - 13208.025: 30.1930% ( 120) 00:08:36.005 13208.025 - 13308.849: 31.5947% ( 122) 00:08:36.005 13308.849 - 13409.674: 32.9733% ( 120) 00:08:36.005 13409.674 - 13510.498: 34.2946% ( 115) 00:08:36.005 13510.498 - 13611.323: 35.4320% ( 99) 00:08:36.005 13611.323 - 13712.148: 36.7877% ( 118) 00:08:36.005 13712.148 - 13812.972: 38.3272% ( 134) 00:08:36.005 13812.972 - 13913.797: 39.9357% ( 140) 00:08:36.005 13913.797 - 14014.622: 41.6590% ( 150) 00:08:36.005 14014.622 - 14115.446: 43.0492% ( 121) 00:08:36.005 14115.446 - 14216.271: 44.5887% ( 134) 00:08:36.005 14216.271 - 14317.095: 46.3350% ( 152) 00:08:36.005 14317.095 - 14417.920: 48.1618% ( 159) 00:08:36.005 14417.920 - 14518.745: 49.7358% ( 137) 00:08:36.005 14518.745 - 14619.569: 51.2638% ( 133) 00:08:36.005 14619.569 - 14720.394: 52.8033% ( 134) 00:08:36.005 14720.394 - 14821.218: 54.5037% ( 148) 00:08:36.005 14821.218 - 14922.043: 56.1006% ( 139) 00:08:36.005 14922.043 - 15022.868: 57.7895% ( 147) 00:08:36.005 15022.868 - 15123.692: 59.1222% ( 116) 00:08:36.005 15123.692 - 
15224.517: 60.6733% ( 135) 00:08:36.005 15224.517 - 15325.342: 62.2932% ( 141) 00:08:36.005 15325.342 - 15426.166: 64.1085% ( 158) 00:08:36.005 15426.166 - 15526.991: 65.9237% ( 158) 00:08:36.005 15526.991 - 15627.815: 67.6815% ( 153) 00:08:36.005 15627.815 - 15728.640: 69.3704% ( 147) 00:08:36.005 15728.640 - 15829.465: 71.0248% ( 144) 00:08:36.005 15829.465 - 15930.289: 72.7941% ( 154) 00:08:36.005 15930.289 - 16031.114: 74.5864% ( 156) 00:08:36.005 16031.114 - 16131.938: 76.3787% ( 156) 00:08:36.005 16131.938 - 16232.763: 77.9642% ( 138) 00:08:36.005 16232.763 - 16333.588: 79.2969% ( 116) 00:08:36.005 16333.588 - 16434.412: 80.5836% ( 112) 00:08:36.005 16434.412 - 16535.237: 81.8244% ( 108) 00:08:36.005 16535.237 - 16636.062: 82.8814% ( 92) 00:08:36.005 16636.062 - 16736.886: 84.1912% ( 114) 00:08:36.005 16736.886 - 16837.711: 85.4894% ( 113) 00:08:36.005 16837.711 - 16938.535: 86.6728% ( 103) 00:08:36.005 16938.535 - 17039.360: 87.5689% ( 78) 00:08:36.005 17039.360 - 17140.185: 88.5455% ( 85) 00:08:36.005 17140.185 - 17241.009: 89.4072% ( 75) 00:08:36.005 17241.009 - 17341.834: 90.2574% ( 74) 00:08:36.005 17341.834 - 17442.658: 91.0156% ( 66) 00:08:36.005 17442.658 - 17543.483: 91.7394% ( 63) 00:08:36.005 17543.483 - 17644.308: 92.4058% ( 58) 00:08:36.005 17644.308 - 17745.132: 92.9573% ( 48) 00:08:36.005 17745.132 - 17845.957: 93.4398% ( 42) 00:08:36.005 17845.957 - 17946.782: 93.7270% ( 25) 00:08:36.005 17946.782 - 18047.606: 93.9338% ( 18) 00:08:36.005 18047.606 - 18148.431: 94.1291% ( 17) 00:08:36.005 18148.431 - 18249.255: 94.3015% ( 15) 00:08:36.005 18249.255 - 18350.080: 94.5427% ( 21) 00:08:36.005 18350.080 - 18450.905: 94.7266% ( 16) 00:08:36.005 18450.905 - 18551.729: 94.9678% ( 21) 00:08:36.005 18551.729 - 18652.554: 95.1172% ( 13) 00:08:36.005 18652.554 - 18753.378: 95.2780% ( 14) 00:08:36.005 18753.378 - 18854.203: 95.4159% ( 12) 00:08:36.005 18854.203 - 18955.028: 95.5997% ( 16) 00:08:36.005 18955.028 - 19055.852: 95.7261% ( 11) 00:08:36.005 19055.852 - 19156.677: 95.8410% ( 10) 00:08:36.005 19156.677 - 19257.502: 95.9674% ( 11) 00:08:36.005 19257.502 - 19358.326: 96.0938% ( 11) 00:08:36.005 19358.326 - 19459.151: 96.1857% ( 8) 00:08:36.005 19459.151 - 19559.975: 96.2431% ( 5) 00:08:36.005 19559.975 - 19660.800: 96.3006% ( 5) 00:08:36.005 19660.800 - 19761.625: 96.3235% ( 2) 00:08:36.005 20467.397 - 20568.222: 96.3350% ( 1) 00:08:36.005 20568.222 - 20669.046: 96.3810% ( 4) 00:08:36.005 20669.046 - 20769.871: 96.4384% ( 5) 00:08:36.005 20769.871 - 20870.695: 96.4959% ( 5) 00:08:36.005 20870.695 - 20971.520: 96.5533% ( 5) 00:08:36.005 20971.520 - 21072.345: 96.6108% ( 5) 00:08:36.005 21072.345 - 21173.169: 96.6682% ( 5) 00:08:36.005 21173.169 - 21273.994: 96.7256% ( 5) 00:08:36.005 21273.994 - 21374.818: 96.7946% ( 6) 00:08:36.005 21374.818 - 21475.643: 96.8520% ( 5) 00:08:36.005 21475.643 - 21576.468: 96.9095% ( 5) 00:08:36.005 21576.468 - 21677.292: 96.9439% ( 3) 00:08:36.005 21677.292 - 21778.117: 97.0014% ( 5) 00:08:36.005 21778.117 - 21878.942: 97.0358% ( 3) 00:08:36.005 21878.942 - 21979.766: 97.0588% ( 2) 00:08:36.005 25306.978 - 25407.803: 97.0703% ( 1) 00:08:36.005 25407.803 - 25508.628: 97.1507% ( 7) 00:08:36.005 25508.628 - 25609.452: 97.2886% ( 12) 00:08:36.005 25609.452 - 25710.277: 97.4150% ( 11) 00:08:36.005 25710.277 - 25811.102: 97.5414% ( 11) 00:08:36.005 25811.102 - 26012.751: 97.7941% ( 22) 00:08:36.005 26012.751 - 26214.400: 98.0469% ( 22) 00:08:36.005 26214.400 - 26416.049: 98.2881% ( 21) 00:08:36.005 26416.049 - 26617.698: 98.4835% ( 17) 00:08:36.005 
26617.698 - 26819.348: 98.5294% ( 4) 00:08:36.005 29239.138 - 29440.788: 98.5983% ( 6) 00:08:36.005 29440.788 - 29642.437: 98.7132% ( 10) 00:08:36.005 29642.437 - 29844.086: 98.8166% ( 9) 00:08:36.005 29844.086 - 30045.735: 98.9315% ( 10) 00:08:36.005 30045.735 - 30247.385: 99.0349% ( 9) 00:08:36.005 30247.385 - 30449.034: 99.1498% ( 10) 00:08:36.005 30449.034 - 30650.683: 99.2647% ( 10) 00:08:36.005 33877.071 - 34078.720: 99.2762% ( 1) 00:08:36.005 34078.720 - 34280.369: 99.3911% ( 10) 00:08:36.005 34280.369 - 34482.018: 99.5175% ( 11) 00:08:36.005 34482.018 - 34683.668: 99.6324% ( 10) 00:08:36.005 34683.668 - 34885.317: 99.7472% ( 10) 00:08:36.005 34885.317 - 35086.966: 99.8736% ( 11) 00:08:36.005 35086.966 - 35288.615: 99.9885% ( 10) 00:08:36.005 35288.615 - 35490.265: 100.0000% ( 1) 00:08:36.005 00:08:36.005 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:36.005 ============================================================================== 00:08:36.005 Range in us Cumulative IO count 00:08:36.005 8368.443 - 8418.855: 0.0345% ( 3) 00:08:36.005 8418.855 - 8469.268: 0.0574% ( 2) 00:08:36.005 8469.268 - 8519.680: 0.0919% ( 3) 00:08:36.005 8519.680 - 8570.092: 0.1034% ( 1) 00:08:36.005 8570.092 - 8620.505: 0.1494% ( 4) 00:08:36.005 8620.505 - 8670.917: 0.1838% ( 3) 00:08:36.005 8670.917 - 8721.329: 0.2068% ( 2) 00:08:36.005 8721.329 - 8771.742: 0.2413% ( 3) 00:08:36.005 8771.742 - 8822.154: 0.2757% ( 3) 00:08:36.005 8822.154 - 8872.566: 0.3102% ( 3) 00:08:36.005 8872.566 - 8922.978: 0.3332% ( 2) 00:08:36.005 8922.978 - 8973.391: 0.3676% ( 3) 00:08:36.005 8973.391 - 9023.803: 0.3906% ( 2) 00:08:36.005 9023.803 - 9074.215: 0.4251% ( 3) 00:08:36.005 9074.215 - 9124.628: 0.4825% ( 5) 00:08:36.005 9124.628 - 9175.040: 0.5170% ( 3) 00:08:36.005 9175.040 - 9225.452: 0.5744% ( 5) 00:08:36.005 9225.452 - 9275.865: 0.6434% ( 6) 00:08:36.005 9275.865 - 9326.277: 0.6778% ( 3) 00:08:36.005 9326.277 - 9376.689: 0.7927% ( 10) 00:08:36.005 9376.689 - 9427.102: 0.8847% ( 8) 00:08:36.005 9427.102 - 9477.514: 0.9651% ( 7) 00:08:36.005 9477.514 - 9527.926: 1.0685% ( 9) 00:08:36.005 9527.926 - 9578.338: 1.1374% ( 6) 00:08:36.005 9578.338 - 9628.751: 1.2063% ( 6) 00:08:36.005 9628.751 - 9679.163: 1.2868% ( 7) 00:08:36.005 9679.163 - 9729.575: 1.3327% ( 4) 00:08:36.005 9729.575 - 9779.988: 1.3902% ( 5) 00:08:36.005 9779.988 - 9830.400: 1.4361% ( 4) 00:08:36.005 9830.400 - 9880.812: 1.5165% ( 7) 00:08:36.005 9880.812 - 9931.225: 1.6544% ( 12) 00:08:36.005 9931.225 - 9981.637: 1.7578% ( 9) 00:08:36.005 9981.637 - 10032.049: 1.8957% ( 12) 00:08:36.005 10032.049 - 10082.462: 2.0335% ( 12) 00:08:36.005 10082.462 - 10132.874: 2.1484% ( 10) 00:08:36.005 10132.874 - 10183.286: 2.2748% ( 11) 00:08:36.005 10183.286 - 10233.698: 2.4242% ( 13) 00:08:36.005 10233.698 - 10284.111: 2.5965% ( 15) 00:08:36.005 10284.111 - 10334.523: 2.6999% ( 9) 00:08:36.005 10334.523 - 10384.935: 2.8033% ( 9) 00:08:36.005 10384.935 - 10435.348: 2.9871% ( 16) 00:08:36.005 10435.348 - 10485.760: 3.3203% ( 29) 00:08:36.005 10485.760 - 10536.172: 3.6420% ( 28) 00:08:36.005 10536.172 - 10586.585: 3.7914% ( 13) 00:08:36.005 10586.585 - 10636.997: 3.9752% ( 16) 00:08:36.005 10636.997 - 10687.409: 4.2624% ( 25) 00:08:36.005 10687.409 - 10737.822: 4.5152% ( 22) 00:08:36.005 10737.822 - 10788.234: 4.8483% ( 29) 00:08:36.005 10788.234 - 10838.646: 5.1585% ( 27) 00:08:36.006 10838.646 - 10889.058: 5.5147% ( 31) 00:08:36.006 10889.058 - 10939.471: 5.8824% ( 32) 00:08:36.006 10939.471 - 10989.883: 6.1581% ( 24) 00:08:36.006 10989.883 - 
00:08:36.006 [latency histogram continued: bucket rows from 11040.295 us through 35288.615 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.006 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:36.006 ==============================================================================
00:08:36.006        Range in us     Cumulative IO count
00:08:36.007 [bucket rows from 8418.855 us through 34885.317 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.007 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:36.007 ==============================================================================
00:08:36.007        Range in us     Cumulative IO count
00:08:36.008 [bucket rows from 7864.320 us through 35490.265 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.009 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:36.009 ==============================================================================
00:08:36.009        Range in us     Cumulative IO count
00:08:36.009 [bucket rows from 7360.197 us through 35288.615 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.010 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:36.010 ==============================================================================
00:08:36.010        Range in us     Cumulative IO count
00:08:36.011 [bucket rows from 6906.486 us through 30449.034 us omitted; cumulative IO count reaches 100.0000%]
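Each histogram row above shares one format: "<low_us> - <high_us>: <cumulative%> ( <count>)", prefixed by the log timestamp. A minimal parsing sketch, assuming plain Python 3; this is not part of the SPDK tooling, and the name parse_buckets is made up for illustration:

import re

# Matches one spdk_nvme_perf histogram bucket row, values in microseconds.
ROW = re.compile(
    r"(?P<low>\d+\.\d+)\s*-\s*(?P<high>\d+\.\d+):\s*"
    r"(?P<cum>\d+\.\d+)%\s*\(\s*(?P<count>\d+)\)"
)

def parse_buckets(lines):
    """Yield (low_us, high_us, cumulative_pct, io_count) per bucket row."""
    for line in lines:
        m = ROW.search(line)
        if m:
            yield (float(m["low"]), float(m["high"]),
                   float(m["cum"]), int(m["count"]))

# Example against one row copied from the log above:
sample = "00:08:36.006 11040.295 - 11090.708: 6.9853% ( 44)"
print(next(parse_buckets([sample])))  # (11040.295, 11090.708, 6.9853, 44)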
00:08:36.011 23:03:55 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:36.959 Initializing NVMe Controllers
00:08:36.959 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:36.959 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:36.959 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:36.959 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:36.959 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:36.959 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:36.959 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:36.960 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:36.960 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:36.960 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:36.960 Initialization complete. Launching workers.
00:08:36.960 ========================================================
00:08:36.960                                                                    Latency(us)
00:08:36.960 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:36.960 PCIE (0000:00:13.0) NSID 1 from core 0:    8252.70      96.71   15526.39   10078.06   35428.08
00:08:36.960 PCIE (0000:00:10.0) NSID 1 from core 0:    8252.70      96.71   15510.64    8834.29   35835.63
00:08:36.960 PCIE (0000:00:11.0) NSID 1 from core 0:    8252.70      96.71   15491.33    8423.70   34732.64
00:08:36.960 PCIE (0000:00:12.0) NSID 1 from core 0:    8252.70      96.71   15472.74    6789.96   35064.59
00:08:36.960 PCIE (0000:00:12.0) NSID 2 from core 0:    8252.70      96.71   15454.06    6425.81   34132.66
00:08:36.960 PCIE (0000:00:12.0) NSID 3 from core 0:    8316.67      97.46   15316.92    6167.19   26847.72
00:08:36.960 ========================================================
00:08:36.960 Total                                  :   49580.17     581.02   15461.82    6167.19   35835.63
00:08:36.960 Summary latency data (us) for all namespaces from core 0:
00:08:36.960 =================================================================================
00:08:36.960 Percentile    13.0 NSID1   10.0 NSID1   11.0 NSID1   12.0 NSID1   12.0 NSID2   12.0 NSID3
00:08:36.960   1.00000% :   11191.532    10586.585    10788.234    10687.409    11040.295    10788.234
00:08:36.960  10.00000% :   12754.314    12804.726    12855.138    12804.726    12754.314    12804.726
00:08:36.960  25.00000% :   13611.323    13611.323    13611.323    13611.323    13510.498    13510.498
00:08:36.960  50.00000% :   14619.569    14720.394    14619.569    14619.569    14619.569    14720.394
00:08:36.960  75.00000% :   17341.834    17341.834    17241.009    17341.834    17241.009    17241.009
00:08:36.960  90.00000% :   19358.326    19358.326    19358.326    19358.326    19459.151    19257.502
00:08:36.960  95.00000% :   19761.625    19862.449    19761.625    19862.449    19862.449    19761.625
00:08:36.960  98.00000% :   20366.572    20467.397    20467.397    20568.222    20568.222    20164.923
00:08:36.960  99.00000% :   28230.892    27020.997    26416.049    27020.997    26012.751    20769.871
00:08:36.960  99.50000% :   34280.369    34885.317    33877.071    34280.369    33272.123    26012.751
00:08:36.960  99.90000% :   35288.615    35691.914    34683.668    34885.317    34078.720    26819.348
00:08:36.960  99.99000% :   35490.265    35893.563    34885.317    35086.966    34280.369    27020.997
00:08:36.960  99.99900% :   35490.265    35893.563    34885.317    35086.966    34280.369    27020.997
00:08:36.960  99.99990% :   35490.265    35893.563    34885.317    35086.966    34280.369    27020.997
00:08:36.960  99.99999% :   35490.265    35893.563    34885.317    35086.966    34280.369    27020.997
00:08:36.960 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:36.960 ==============================================================================
00:08:36.960        Range in us     Cumulative IO count
00:08:36.960 [bucket rows from 10032.049 us through 35490.265 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.960 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:36.961 ==============================================================================
00:08:36.961        Range in us     Cumulative IO count
00:08:36.961 [bucket rows from 8822.154 us through 35893.563 us omitted; cumulative IO count reaches 100.0000%]
00:08:36.961 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:36.961 ==============================================================================
00:08:36.961        Range in us     Cumulative IO count
00:08:36.961 [bucket rows from 8418.855 us onward; log output truncated here]
11342.769: 1.6594% ( 4) 00:08:36.961 11342.769 - 11393.182: 1.6957% ( 3) 00:08:36.961 11393.182 - 11443.594: 1.7321% ( 3) 00:08:36.961 11443.594 - 11494.006: 1.7684% ( 3) 00:08:36.961 11494.006 - 11544.418: 1.8895% ( 10) 00:08:36.961 11544.418 - 11594.831: 2.1076% ( 18) 00:08:36.961 11594.831 - 11645.243: 2.2408% ( 11) 00:08:36.961 11645.243 - 11695.655: 2.4104% ( 14) 00:08:36.961 11695.655 - 11746.068: 2.6163% ( 17) 00:08:36.961 11746.068 - 11796.480: 2.8464% ( 19) 00:08:36.961 11796.480 - 11846.892: 3.1492% ( 25) 00:08:36.961 11846.892 - 11897.305: 3.4884% ( 28) 00:08:36.961 11897.305 - 11947.717: 3.9002% ( 34) 00:08:36.961 11947.717 - 11998.129: 4.2393% ( 28) 00:08:36.961 11998.129 - 12048.542: 4.6148% ( 31) 00:08:36.961 12048.542 - 12098.954: 5.0388% ( 35) 00:08:36.961 12098.954 - 12149.366: 5.3658% ( 27) 00:08:36.961 12149.366 - 12199.778: 5.6565% ( 24) 00:08:36.961 12199.778 - 12250.191: 5.8745% ( 18) 00:08:36.961 12250.191 - 12300.603: 6.1410% ( 22) 00:08:36.961 12300.603 - 12351.015: 6.4075% ( 22) 00:08:36.961 12351.015 - 12401.428: 6.6860% ( 23) 00:08:36.961 12401.428 - 12451.840: 6.9767% ( 24) 00:08:36.961 12451.840 - 12502.252: 7.2432% ( 22) 00:08:36.961 12502.252 - 12552.665: 7.5581% ( 26) 00:08:36.961 12552.665 - 12603.077: 7.8125% ( 21) 00:08:36.961 12603.077 - 12653.489: 8.1880% ( 31) 00:08:36.961 12653.489 - 12703.902: 8.6240% ( 36) 00:08:36.961 12703.902 - 12754.314: 9.0722% ( 37) 00:08:36.961 12754.314 - 12804.726: 9.6778% ( 50) 00:08:36.961 12804.726 - 12855.138: 10.2471% ( 47) 00:08:36.961 12855.138 - 12905.551: 10.9012% ( 54) 00:08:36.961 12905.551 - 13006.375: 12.7180% ( 150) 00:08:36.961 13006.375 - 13107.200: 14.6923% ( 163) 00:08:36.961 13107.200 - 13208.025: 16.8120% ( 175) 00:08:36.961 13208.025 - 13308.849: 19.2951% ( 205) 00:08:36.961 13308.849 - 13409.674: 21.2815% ( 164) 00:08:36.961 13409.674 - 13510.498: 23.3406% ( 170) 00:08:36.961 13510.498 - 13611.323: 25.5572% ( 183) 00:08:36.961 13611.323 - 13712.148: 28.0039% ( 202) 00:08:36.961 13712.148 - 13812.972: 30.1720% ( 179) 00:08:36.961 13812.972 - 13913.797: 32.1948% ( 167) 00:08:36.961 13913.797 - 14014.622: 34.6172% ( 200) 00:08:36.961 14014.622 - 14115.446: 37.3547% ( 226) 00:08:36.961 14115.446 - 14216.271: 40.3343% ( 246) 00:08:36.961 14216.271 - 14317.095: 43.0111% ( 221) 00:08:36.961 14317.095 - 14417.920: 45.8697% ( 236) 00:08:36.961 14417.920 - 14518.745: 48.5223% ( 219) 00:08:36.961 14518.745 - 14619.569: 51.1143% ( 214) 00:08:36.961 14619.569 - 14720.394: 53.1856% ( 171) 00:08:36.961 14720.394 - 14821.218: 54.9055% ( 142) 00:08:36.961 14821.218 - 14922.043: 56.5044% ( 132) 00:08:36.961 14922.043 - 15022.868: 57.6550% ( 95) 00:08:36.961 15022.868 - 15123.692: 58.8905% ( 102) 00:08:36.961 15123.692 - 15224.517: 59.8958% ( 83) 00:08:36.961 15224.517 - 15325.342: 61.2040% ( 108) 00:08:36.961 15325.342 - 15426.166: 62.4758% ( 105) 00:08:36.961 15426.166 - 15526.991: 63.6749% ( 99) 00:08:36.961 15526.991 - 15627.815: 64.4622% ( 65) 00:08:36.961 15627.815 - 15728.640: 65.4554% ( 82) 00:08:36.961 15728.640 - 15829.465: 66.5455% ( 90) 00:08:36.961 15829.465 - 15930.289: 67.5024% ( 79) 00:08:36.961 15930.289 - 16031.114: 68.7379% ( 102) 00:08:36.961 16031.114 - 16131.938: 69.9491% ( 100) 00:08:36.961 16131.938 - 16232.763: 70.6638% ( 59) 00:08:36.961 16232.763 - 16333.588: 71.3057% ( 53) 00:08:36.961 16333.588 - 16434.412: 71.8992% ( 49) 00:08:36.961 16434.412 - 16535.237: 72.1657% ( 22) 00:08:36.961 16535.237 - 16636.062: 72.3958% ( 19) 00:08:36.961 16636.062 - 16736.886: 72.7108% ( 26) 00:08:36.961 
16736.886 - 16837.711: 73.1831% ( 39) 00:08:36.961 16837.711 - 16938.535: 73.7040% ( 43) 00:08:36.961 16938.535 - 17039.360: 74.1642% ( 38) 00:08:36.961 17039.360 - 17140.185: 74.8062% ( 53) 00:08:36.961 17140.185 - 17241.009: 75.3270% ( 43) 00:08:36.961 17241.009 - 17341.834: 75.6177% ( 24) 00:08:36.961 17341.834 - 17442.658: 75.8479% ( 19) 00:08:36.961 17442.658 - 17543.483: 76.1022% ( 21) 00:08:36.961 17543.483 - 17644.308: 76.5625% ( 38) 00:08:36.961 17644.308 - 17745.132: 77.1076% ( 45) 00:08:36.961 17745.132 - 17845.957: 77.5799% ( 39) 00:08:36.961 17845.957 - 17946.782: 78.0402% ( 38) 00:08:36.961 17946.782 - 18047.606: 78.6701% ( 52) 00:08:36.961 18047.606 - 18148.431: 79.2515% ( 48) 00:08:36.961 18148.431 - 18249.255: 79.8328% ( 48) 00:08:36.961 18249.255 - 18350.080: 80.4748% ( 53) 00:08:36.961 18350.080 - 18450.905: 80.9835% ( 42) 00:08:36.961 18450.905 - 18551.729: 81.6618% ( 56) 00:08:36.961 18551.729 - 18652.554: 82.5945% ( 77) 00:08:36.961 18652.554 - 18753.378: 83.7088% ( 92) 00:08:36.962 18753.378 - 18854.203: 84.7626% ( 87) 00:08:36.962 18854.203 - 18955.028: 85.7074% ( 78) 00:08:36.962 18955.028 - 19055.852: 86.9428% ( 102) 00:08:36.962 19055.852 - 19156.677: 88.3115% ( 113) 00:08:36.962 19156.677 - 19257.502: 89.3895% ( 89) 00:08:36.962 19257.502 - 19358.326: 90.6613% ( 105) 00:08:36.962 19358.326 - 19459.151: 91.7272% ( 88) 00:08:36.962 19459.151 - 19559.975: 92.8173% ( 90) 00:08:36.962 19559.975 - 19660.800: 93.7500% ( 77) 00:08:36.962 19660.800 - 19761.625: 95.0218% ( 105) 00:08:36.962 19761.625 - 19862.449: 95.9302% ( 75) 00:08:36.962 19862.449 - 19963.274: 96.5722% ( 53) 00:08:36.962 19963.274 - 20064.098: 97.0930% ( 43) 00:08:36.962 20064.098 - 20164.923: 97.4806% ( 32) 00:08:36.962 20164.923 - 20265.748: 97.7108% ( 19) 00:08:36.962 20265.748 - 20366.572: 97.9046% ( 16) 00:08:36.962 20366.572 - 20467.397: 98.0257% ( 10) 00:08:36.962 20467.397 - 20568.222: 98.1468% ( 10) 00:08:36.962 20568.222 - 20669.046: 98.2074% ( 5) 00:08:36.962 20669.046 - 20769.871: 98.2679% ( 5) 00:08:36.962 20769.871 - 20870.695: 98.3406% ( 6) 00:08:36.962 20870.695 - 20971.520: 98.4012% ( 5) 00:08:36.962 20971.520 - 21072.345: 98.4496% ( 4) 00:08:36.962 25206.154 - 25306.978: 98.4738% ( 2) 00:08:36.962 25306.978 - 25407.803: 98.5223% ( 4) 00:08:36.962 25407.803 - 25508.628: 98.5828% ( 5) 00:08:36.962 25508.628 - 25609.452: 98.6434% ( 5) 00:08:36.962 25609.452 - 25710.277: 98.6919% ( 4) 00:08:36.962 25710.277 - 25811.102: 98.7403% ( 4) 00:08:36.962 25811.102 - 26012.751: 98.8614% ( 10) 00:08:36.962 26012.751 - 26214.400: 98.9826% ( 10) 00:08:36.962 26214.400 - 26416.049: 99.0916% ( 9) 00:08:36.962 26416.049 - 26617.698: 99.2006% ( 9) 00:08:36.962 26617.698 - 26819.348: 99.2248% ( 2) 00:08:36.962 33070.474 - 33272.123: 99.2369% ( 1) 00:08:36.962 33272.123 - 33473.772: 99.3459% ( 9) 00:08:36.962 33473.772 - 33675.422: 99.4428% ( 8) 00:08:36.962 33675.422 - 33877.071: 99.5518% ( 9) 00:08:36.962 33877.071 - 34078.720: 99.6366% ( 7) 00:08:36.962 34078.720 - 34280.369: 99.7456% ( 9) 00:08:36.962 34280.369 - 34482.018: 99.8547% ( 9) 00:08:36.962 34482.018 - 34683.668: 99.9637% ( 9) 00:08:36.962 34683.668 - 34885.317: 100.0000% ( 3) 00:08:36.962 00:08:36.962 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:36.962 ============================================================================== 00:08:36.962 Range in us Cumulative IO count 00:08:36.962 6755.249 - 6805.662: 0.0121% ( 1) 00:08:36.962 6805.662 - 6856.074: 0.0484% ( 3) 00:08:36.962 6856.074 - 6906.486: 0.0727% ( 2) 
00:08:36.962 6906.486 - 6956.898: 0.0969% ( 2) 00:08:36.962 6956.898 - 7007.311: 0.1211% ( 2) 00:08:36.962 7007.311 - 7057.723: 0.1453% ( 2) 00:08:36.962 7057.723 - 7108.135: 0.1817% ( 3) 00:08:36.962 7108.135 - 7158.548: 0.2059% ( 2) 00:08:36.962 7158.548 - 7208.960: 0.2301% ( 2) 00:08:36.962 7208.960 - 7259.372: 0.2544% ( 2) 00:08:36.962 7259.372 - 7309.785: 0.2907% ( 3) 00:08:36.962 7309.785 - 7360.197: 0.3391% ( 4) 00:08:36.962 7360.197 - 7410.609: 0.4239% ( 7) 00:08:36.962 7410.609 - 7461.022: 0.5087% ( 7) 00:08:36.962 7461.022 - 7511.434: 0.5693% ( 5) 00:08:36.962 7511.434 - 7561.846: 0.6056% ( 3) 00:08:36.962 7561.846 - 7612.258: 0.6420% ( 3) 00:08:36.962 7612.258 - 7662.671: 0.6783% ( 3) 00:08:36.962 7662.671 - 7713.083: 0.7146% ( 3) 00:08:36.962 7713.083 - 7763.495: 0.7389% ( 2) 00:08:36.962 7763.495 - 7813.908: 0.7631% ( 2) 00:08:36.962 7914.732 - 7965.145: 0.7752% ( 1) 00:08:36.962 10384.935 - 10435.348: 0.7873% ( 1) 00:08:36.962 10435.348 - 10485.760: 0.8236% ( 3) 00:08:36.962 10485.760 - 10536.172: 0.8721% ( 4) 00:08:36.962 10536.172 - 10586.585: 0.9084% ( 3) 00:08:36.962 10586.585 - 10636.997: 0.9569% ( 4) 00:08:36.962 10636.997 - 10687.409: 1.0053% ( 4) 00:08:36.962 10687.409 - 10737.822: 1.0417% ( 3) 00:08:36.962 10737.822 - 10788.234: 1.0901% ( 4) 00:08:36.962 10788.234 - 10838.646: 1.2476% ( 13) 00:08:36.962 10838.646 - 10889.058: 1.2960% ( 4) 00:08:36.962 10889.058 - 10939.471: 1.3324% ( 3) 00:08:36.962 10939.471 - 10989.883: 1.3566% ( 2) 00:08:36.962 10989.883 - 11040.295: 1.3929% ( 3) 00:08:36.962 11040.295 - 11090.708: 1.4293% ( 3) 00:08:36.962 11090.708 - 11141.120: 1.4656% ( 3) 00:08:36.962 11141.120 - 11191.532: 1.5262% ( 5) 00:08:36.962 11191.532 - 11241.945: 1.6109% ( 7) 00:08:36.962 11241.945 - 11292.357: 1.6836% ( 6) 00:08:36.962 11292.357 - 11342.769: 1.7563% ( 6) 00:08:36.962 11342.769 - 11393.182: 1.8047% ( 4) 00:08:36.962 11393.182 - 11443.594: 1.9138% ( 9) 00:08:36.962 11443.594 - 11494.006: 2.2287% ( 26) 00:08:36.962 11494.006 - 11544.418: 2.3498% ( 10) 00:08:36.962 11544.418 - 11594.831: 2.4709% ( 10) 00:08:36.962 11594.831 - 11645.243: 2.6526% ( 15) 00:08:36.962 11645.243 - 11695.655: 2.8343% ( 15) 00:08:36.962 11695.655 - 11746.068: 2.9675% ( 11) 00:08:36.962 11746.068 - 11796.480: 3.1129% ( 12) 00:08:36.962 11796.480 - 11846.892: 3.2582% ( 12) 00:08:36.962 11846.892 - 11897.305: 3.4399% ( 15) 00:08:36.962 11897.305 - 11947.717: 3.7185% ( 23) 00:08:36.962 11947.717 - 11998.129: 3.9365% ( 18) 00:08:36.962 11998.129 - 12048.542: 4.0940% ( 13) 00:08:36.962 12048.542 - 12098.954: 4.2757% ( 15) 00:08:36.962 12098.954 - 12149.366: 4.5785% ( 25) 00:08:36.962 12149.366 - 12199.778: 4.7723% ( 16) 00:08:36.962 12199.778 - 12250.191: 5.0145% ( 20) 00:08:36.962 12250.191 - 12300.603: 5.3295% ( 26) 00:08:36.962 12300.603 - 12351.015: 5.8018% ( 39) 00:08:36.962 12351.015 - 12401.428: 6.1652% ( 30) 00:08:36.962 12401.428 - 12451.840: 6.5770% ( 34) 00:08:36.962 12451.840 - 12502.252: 7.1100% ( 44) 00:08:36.962 12502.252 - 12552.665: 7.7519% ( 53) 00:08:36.962 12552.665 - 12603.077: 8.4181% ( 55) 00:08:36.962 12603.077 - 12653.489: 8.9026% ( 40) 00:08:36.962 12653.489 - 12703.902: 9.3629% ( 38) 00:08:36.962 12703.902 - 12754.314: 9.7747% ( 34) 00:08:36.962 12754.314 - 12804.726: 10.3077% ( 44) 00:08:36.962 12804.726 - 12855.138: 10.8285% ( 43) 00:08:36.962 12855.138 - 12905.551: 11.6037% ( 64) 00:08:36.962 12905.551 - 13006.375: 13.3721% ( 146) 00:08:36.962 13006.375 - 13107.200: 15.2616% ( 156) 00:08:36.962 13107.200 - 13208.025: 17.3813% ( 175) 00:08:36.962 13208.025 
- 13308.849: 19.9128% ( 209) 00:08:36.962 13308.849 - 13409.674: 22.6139% ( 223) 00:08:36.962 13409.674 - 13510.498: 24.9394% ( 192) 00:08:36.962 13510.498 - 13611.323: 27.3498% ( 199) 00:08:36.962 13611.323 - 13712.148: 30.1599% ( 232) 00:08:36.962 13712.148 - 13812.972: 32.9215% ( 228) 00:08:36.962 13812.972 - 13913.797: 35.3561% ( 201) 00:08:36.962 13913.797 - 14014.622: 37.4637% ( 174) 00:08:36.962 14014.622 - 14115.446: 39.8740% ( 199) 00:08:36.962 14115.446 - 14216.271: 42.0422% ( 179) 00:08:36.962 14216.271 - 14317.095: 44.4889% ( 202) 00:08:36.962 14317.095 - 14417.920: 46.8144% ( 192) 00:08:36.962 14417.920 - 14518.745: 49.4549% ( 218) 00:08:36.962 14518.745 - 14619.569: 51.7442% ( 189) 00:08:36.962 14619.569 - 14720.394: 53.5610% ( 150) 00:08:36.962 14720.394 - 14821.218: 55.1720% ( 133) 00:08:36.962 14821.218 - 14922.043: 56.6739% ( 124) 00:08:36.962 14922.043 - 15022.868: 57.9215% ( 103) 00:08:36.962 15022.868 - 15123.692: 59.2539% ( 110) 00:08:36.962 15123.692 - 15224.517: 60.3561% ( 91) 00:08:36.962 15224.517 - 15325.342: 61.2888% ( 77) 00:08:36.962 15325.342 - 15426.166: 62.2214% ( 77) 00:08:36.962 15426.166 - 15526.991: 63.5174% ( 107) 00:08:36.962 15526.991 - 15627.815: 64.3653% ( 70) 00:08:36.962 15627.815 - 15728.640: 65.1163% ( 62) 00:08:36.962 15728.640 - 15829.465: 66.0610% ( 78) 00:08:36.962 15829.465 - 15930.289: 67.1148% ( 87) 00:08:36.962 15930.289 - 16031.114: 68.2292% ( 92) 00:08:36.962 16031.114 - 16131.938: 69.0165% ( 65) 00:08:36.962 16131.938 - 16232.763: 69.9249% ( 75) 00:08:36.962 16232.763 - 16333.588: 70.8576% ( 77) 00:08:36.962 16333.588 - 16434.412: 71.4026% ( 45) 00:08:36.962 16434.412 - 16535.237: 71.9598% ( 46) 00:08:36.962 16535.237 - 16636.062: 72.6502% ( 57) 00:08:36.962 16636.062 - 16736.886: 73.2195% ( 47) 00:08:36.962 16736.886 - 16837.711: 73.6192% ( 33) 00:08:36.962 16837.711 - 16938.535: 73.8857% ( 22) 00:08:36.962 16938.535 - 17039.360: 74.3096% ( 35) 00:08:36.962 17039.360 - 17140.185: 74.6245% ( 26) 00:08:36.962 17140.185 - 17241.009: 74.9394% ( 26) 00:08:36.962 17241.009 - 17341.834: 75.1575% ( 18) 00:08:36.962 17341.834 - 17442.658: 75.4118% ( 21) 00:08:36.962 17442.658 - 17543.483: 75.7631% ( 29) 00:08:36.962 17543.483 - 17644.308: 76.0659% ( 25) 00:08:36.962 17644.308 - 17745.132: 76.3929% ( 27) 00:08:36.962 17745.132 - 17845.957: 76.9743% ( 48) 00:08:36.962 17845.957 - 17946.782: 77.7011% ( 60) 00:08:36.962 17946.782 - 18047.606: 78.2461% ( 45) 00:08:36.962 18047.606 - 18148.431: 78.8396% ( 49) 00:08:36.962 18148.431 - 18249.255: 79.6512% ( 67) 00:08:36.962 18249.255 - 18350.080: 80.3416% ( 57) 00:08:36.962 18350.080 - 18450.905: 80.9593% ( 51) 00:08:36.962 18450.905 - 18551.729: 81.6013% ( 53) 00:08:36.962 18551.729 - 18652.554: 82.4612% ( 71) 00:08:36.962 18652.554 - 18753.378: 83.5150% ( 87) 00:08:36.962 18753.378 - 18854.203: 84.3629% ( 70) 00:08:36.962 18854.203 - 18955.028: 85.2592% ( 74) 00:08:36.962 18955.028 - 19055.852: 86.6400% ( 114) 00:08:36.962 19055.852 - 19156.677: 87.9118% ( 105) 00:08:36.962 19156.677 - 19257.502: 89.1231% ( 100) 00:08:36.962 19257.502 - 19358.326: 90.3464% ( 101) 00:08:36.962 19358.326 - 19459.151: 91.5940% ( 103) 00:08:36.962 19459.151 - 19559.975: 92.8416% ( 103) 00:08:36.962 19559.975 - 19660.800: 93.9680% ( 93) 00:08:36.962 19660.800 - 19761.625: 94.7917% ( 68) 00:08:36.962 19761.625 - 19862.449: 95.4821% ( 57) 00:08:36.962 19862.449 - 19963.274: 96.0029% ( 43) 00:08:36.962 19963.274 - 20064.098: 96.4995% ( 41) 00:08:36.962 20064.098 - 20164.923: 96.9113% ( 34) 00:08:36.962 20164.923 - 
20265.748: 97.2505% ( 28) 00:08:36.962 20265.748 - 20366.572: 97.4927% ( 20) 00:08:36.962 20366.572 - 20467.397: 97.7229% ( 19) 00:08:36.962 20467.397 - 20568.222: 98.0499% ( 27) 00:08:36.962 20568.222 - 20669.046: 98.2195% ( 14) 00:08:36.962 20669.046 - 20769.871: 98.2922% ( 6) 00:08:36.962 20769.871 - 20870.695: 98.3406% ( 4) 00:08:36.962 20870.695 - 20971.520: 98.3769% ( 3) 00:08:36.962 20971.520 - 21072.345: 98.4254% ( 4) 00:08:36.962 21072.345 - 21173.169: 98.4496% ( 2) 00:08:36.962 25710.277 - 25811.102: 98.4859% ( 3) 00:08:36.962 25811.102 - 26012.751: 98.5828% ( 8) 00:08:36.962 26012.751 - 26214.400: 98.6676% ( 7) 00:08:36.962 26214.400 - 26416.049: 98.7645% ( 8) 00:08:36.962 26416.049 - 26617.698: 98.8493% ( 7) 00:08:36.962 26617.698 - 26819.348: 98.9462% ( 8) 00:08:36.962 26819.348 - 27020.997: 99.0552% ( 9) 00:08:36.962 27020.997 - 27222.646: 99.1642% ( 9) 00:08:36.962 27222.646 - 27424.295: 99.2248% ( 5) 00:08:36.962 33675.422 - 33877.071: 99.3338% ( 9) 00:08:36.962 33877.071 - 34078.720: 99.4428% ( 9) 00:08:36.962 34078.720 - 34280.369: 99.5640% ( 10) 00:08:36.962 34280.369 - 34482.018: 99.6609% ( 8) 00:08:36.962 34482.018 - 34683.668: 99.7820% ( 10) 00:08:36.962 34683.668 - 34885.317: 99.9031% ( 10) 00:08:36.962 34885.317 - 35086.966: 100.0000% ( 8) 00:08:36.962 00:08:36.962 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:36.962 ============================================================================== 00:08:36.962 Range in us Cumulative IO count 00:08:36.962 6402.363 - 6427.569: 0.0121% ( 1) 00:08:36.962 6427.569 - 6452.775: 0.0363% ( 2) 00:08:36.962 6452.775 - 6503.188: 0.1211% ( 7) 00:08:36.962 6503.188 - 6553.600: 0.1575% ( 3) 00:08:36.962 6553.600 - 6604.012: 0.2544% ( 8) 00:08:36.962 6604.012 - 6654.425: 0.4239% ( 14) 00:08:36.962 6654.425 - 6704.837: 0.4845% ( 5) 00:08:36.962 6704.837 - 6755.249: 0.5087% ( 2) 00:08:36.962 6755.249 - 6805.662: 0.5329% ( 2) 00:08:36.962 6805.662 - 6856.074: 0.5693% ( 3) 00:08:36.962 6856.074 - 6906.486: 0.5935% ( 2) 00:08:36.962 6906.486 - 6956.898: 0.6177% ( 2) 00:08:36.962 6956.898 - 7007.311: 0.6541% ( 3) 00:08:36.962 7007.311 - 7057.723: 0.6783% ( 2) 00:08:36.962 7057.723 - 7108.135: 0.7025% ( 2) 00:08:36.962 7108.135 - 7158.548: 0.7389% ( 3) 00:08:36.962 7158.548 - 7208.960: 0.7631% ( 2) 00:08:36.962 7208.960 - 7259.372: 0.7752% ( 1) 00:08:36.962 10788.234 - 10838.646: 0.7994% ( 2) 00:08:36.962 10838.646 - 10889.058: 0.8479% ( 4) 00:08:36.962 10889.058 - 10939.471: 0.8963% ( 4) 00:08:36.962 10939.471 - 10989.883: 0.9448% ( 4) 00:08:36.962 10989.883 - 11040.295: 1.0296% ( 7) 00:08:36.962 11040.295 - 11090.708: 1.1265% ( 8) 00:08:36.962 11090.708 - 11141.120: 1.3081% ( 15) 00:08:36.962 11141.120 - 11191.532: 1.4050% ( 8) 00:08:36.962 11191.532 - 11241.945: 1.5383% ( 11) 00:08:36.962 11241.945 - 11292.357: 1.6352% ( 8) 00:08:36.962 11292.357 - 11342.769: 1.7563% ( 10) 00:08:36.962 11342.769 - 11393.182: 2.0349% ( 23) 00:08:36.962 11393.182 - 11443.594: 2.3135% ( 23) 00:08:36.962 11443.594 - 11494.006: 2.6042% ( 24) 00:08:36.962 11494.006 - 11544.418: 2.8101% ( 17) 00:08:36.962 11544.418 - 11594.831: 3.1613% ( 29) 00:08:36.962 11594.831 - 11645.243: 3.4399% ( 23) 00:08:36.962 11645.243 - 11695.655: 3.7064% ( 22) 00:08:36.962 11695.655 - 11746.068: 3.9608% ( 21) 00:08:36.962 11746.068 - 11796.480: 4.1788% ( 18) 00:08:36.962 11796.480 - 11846.892: 4.4574% ( 23) 00:08:36.962 11846.892 - 11897.305: 4.6875% ( 19) 00:08:36.962 11897.305 - 11947.717: 4.9055% ( 18) 00:08:36.962 11947.717 - 11998.129: 5.0630% ( 13) 
00:08:36.962 11998.129 - 12048.542: 5.2326% ( 14) 00:08:36.962 12048.542 - 12098.954: 5.3416% ( 9) 00:08:36.962 12098.954 - 12149.366: 5.5233% ( 15) 00:08:36.962 12149.366 - 12199.778: 5.7171% ( 16) 00:08:36.962 12199.778 - 12250.191: 5.9351% ( 18) 00:08:36.962 12250.191 - 12300.603: 6.1773% ( 20) 00:08:36.962 12300.603 - 12351.015: 6.5286% ( 29) 00:08:36.962 12351.015 - 12401.428: 6.9162% ( 32) 00:08:36.962 12401.428 - 12451.840: 7.3886% ( 39) 00:08:36.962 12451.840 - 12502.252: 7.8609% ( 39) 00:08:36.962 12502.252 - 12552.665: 8.2485% ( 32) 00:08:36.962 12552.665 - 12603.077: 8.6846% ( 36) 00:08:36.962 12603.077 - 12653.489: 9.1328% ( 37) 00:08:36.962 12653.489 - 12703.902: 9.7020% ( 47) 00:08:36.962 12703.902 - 12754.314: 10.3682% ( 55) 00:08:36.962 12754.314 - 12804.726: 11.0102% ( 53) 00:08:36.962 12804.726 - 12855.138: 11.8217% ( 67) 00:08:36.962 12855.138 - 12905.551: 12.4516% ( 52) 00:08:36.962 12905.551 - 13006.375: 13.8324% ( 114) 00:08:36.962 13006.375 - 13107.200: 15.5887% ( 145) 00:08:36.962 13107.200 - 13208.025: 17.5145% ( 159) 00:08:36.962 13208.025 - 13308.849: 20.0824% ( 212) 00:08:36.962 13308.849 - 13409.674: 22.9772% ( 239) 00:08:36.962 13409.674 - 13510.498: 25.5814% ( 215) 00:08:36.962 13510.498 - 13611.323: 28.3430% ( 228) 00:08:36.962 13611.323 - 13712.148: 30.9956% ( 219) 00:08:36.962 13712.148 - 13812.972: 33.2364% ( 185) 00:08:36.962 13812.972 - 13913.797: 35.9254% ( 222) 00:08:36.962 13913.797 - 14014.622: 38.1298% ( 182) 00:08:36.962 14014.622 - 14115.446: 40.1647% ( 168) 00:08:36.962 14115.446 - 14216.271: 42.4055% ( 185) 00:08:36.963 14216.271 - 14317.095: 44.3072% ( 157) 00:08:36.963 14317.095 - 14417.920: 46.2452% ( 160) 00:08:36.963 14417.920 - 14518.745: 48.2437% ( 165) 00:08:36.963 14518.745 - 14619.569: 50.0363% ( 148) 00:08:36.963 14619.569 - 14720.394: 52.0833% ( 169) 00:08:36.963 14720.394 - 14821.218: 53.8396% ( 145) 00:08:36.963 14821.218 - 14922.043: 55.2810% ( 119) 00:08:36.963 14922.043 - 15022.868: 56.9162% ( 135) 00:08:36.963 15022.868 - 15123.692: 57.9578% ( 86) 00:08:36.963 15123.692 - 15224.517: 59.0964% ( 94) 00:08:36.963 15224.517 - 15325.342: 60.1986% ( 91) 00:08:36.963 15325.342 - 15426.166: 61.2524% ( 87) 00:08:36.963 15426.166 - 15526.991: 62.1972% ( 78) 00:08:36.963 15526.991 - 15627.815: 63.2994% ( 91) 00:08:36.963 15627.815 - 15728.640: 64.4501% ( 95) 00:08:36.963 15728.640 - 15829.465: 65.6371% ( 98) 00:08:36.963 15829.465 - 15930.289: 67.0058% ( 113) 00:08:36.963 15930.289 - 16031.114: 68.1928% ( 98) 00:08:36.963 16031.114 - 16131.938: 69.0286% ( 69) 00:08:36.963 16131.938 - 16232.763: 69.6705% ( 53) 00:08:36.963 16232.763 - 16333.588: 70.2398% ( 47) 00:08:36.963 16333.588 - 16434.412: 70.7728% ( 44) 00:08:36.963 16434.412 - 16535.237: 71.3057% ( 44) 00:08:36.963 16535.237 - 16636.062: 71.7660% ( 38) 00:08:36.963 16636.062 - 16736.886: 72.2747% ( 42) 00:08:36.963 16736.886 - 16837.711: 72.8682% ( 49) 00:08:36.963 16837.711 - 16938.535: 73.4859% ( 51) 00:08:36.963 16938.535 - 17039.360: 74.1037% ( 51) 00:08:36.963 17039.360 - 17140.185: 74.7093% ( 50) 00:08:36.963 17140.185 - 17241.009: 75.1817% ( 39) 00:08:36.963 17241.009 - 17341.834: 75.4966% ( 26) 00:08:36.963 17341.834 - 17442.658: 75.8115% ( 26) 00:08:36.963 17442.658 - 17543.483: 76.1265% ( 26) 00:08:36.963 17543.483 - 17644.308: 76.4414% ( 26) 00:08:36.963 17644.308 - 17745.132: 76.8169% ( 31) 00:08:36.963 17745.132 - 17845.957: 77.3740% ( 46) 00:08:36.963 17845.957 - 17946.782: 77.9433% ( 47) 00:08:36.963 17946.782 - 18047.606: 78.6701% ( 60) 00:08:36.963 18047.606 - 
18148.431: 79.6148% ( 78) 00:08:36.963 18148.431 - 18249.255: 80.3779% ( 63) 00:08:36.963 18249.255 - 18350.080: 81.0441% ( 55) 00:08:36.963 18350.080 - 18450.905: 81.8314% ( 65) 00:08:36.963 18450.905 - 18551.729: 82.4734% ( 53) 00:08:36.963 18551.729 - 18652.554: 83.0184% ( 45) 00:08:36.963 18652.554 - 18753.378: 83.7330% ( 59) 00:08:36.963 18753.378 - 18854.203: 84.4961% ( 63) 00:08:36.963 18854.203 - 18955.028: 85.2955% ( 66) 00:08:36.963 18955.028 - 19055.852: 86.3614% ( 88) 00:08:36.963 19055.852 - 19156.677: 87.3789% ( 84) 00:08:36.963 19156.677 - 19257.502: 88.4569% ( 89) 00:08:36.963 19257.502 - 19358.326: 89.7045% ( 103) 00:08:36.963 19358.326 - 19459.151: 91.1701% ( 121) 00:08:36.963 19459.151 - 19559.975: 92.5145% ( 111) 00:08:36.963 19559.975 - 19660.800: 93.6652% ( 95) 00:08:36.963 19660.800 - 19761.625: 94.5858% ( 76) 00:08:36.963 19761.625 - 19862.449: 95.4215% ( 69) 00:08:36.963 19862.449 - 19963.274: 96.2088% ( 65) 00:08:36.963 19963.274 - 20064.098: 96.6691% ( 38) 00:08:36.963 20064.098 - 20164.923: 97.0809% ( 34) 00:08:36.963 20164.923 - 20265.748: 97.3958% ( 26) 00:08:36.963 20265.748 - 20366.572: 97.6623% ( 22) 00:08:36.963 20366.572 - 20467.397: 97.8803% ( 18) 00:08:36.963 20467.397 - 20568.222: 98.1105% ( 19) 00:08:36.963 20568.222 - 20669.046: 98.2074% ( 8) 00:08:36.963 20669.046 - 20769.871: 98.2922% ( 7) 00:08:36.963 20769.871 - 20870.695: 98.3406% ( 4) 00:08:36.963 20870.695 - 20971.520: 98.3891% ( 4) 00:08:36.963 20971.520 - 21072.345: 98.4375% ( 4) 00:08:36.963 21072.345 - 21173.169: 98.4496% ( 1) 00:08:36.963 24903.680 - 25004.505: 98.4981% ( 4) 00:08:36.963 25004.505 - 25105.329: 98.5465% ( 4) 00:08:36.963 25105.329 - 25206.154: 98.5950% ( 4) 00:08:36.963 25206.154 - 25306.978: 98.6555% ( 5) 00:08:36.963 25306.978 - 25407.803: 98.7161% ( 5) 00:08:36.963 25407.803 - 25508.628: 98.7766% ( 5) 00:08:36.963 25508.628 - 25609.452: 98.8251% ( 4) 00:08:36.963 25609.452 - 25710.277: 98.8857% ( 5) 00:08:36.963 25710.277 - 25811.102: 98.9341% ( 4) 00:08:36.963 25811.102 - 26012.751: 99.0431% ( 9) 00:08:36.963 26012.751 - 26214.400: 99.1521% ( 9) 00:08:36.963 26214.400 - 26416.049: 99.2248% ( 6) 00:08:36.963 32667.175 - 32868.825: 99.3338% ( 9) 00:08:36.963 32868.825 - 33070.474: 99.4307% ( 8) 00:08:36.963 33070.474 - 33272.123: 99.5397% ( 9) 00:08:36.963 33272.123 - 33473.772: 99.6366% ( 8) 00:08:36.963 33473.772 - 33675.422: 99.7578% ( 10) 00:08:36.963 33675.422 - 33877.071: 99.8425% ( 7) 00:08:36.963 33877.071 - 34078.720: 99.9637% ( 10) 00:08:36.963 34078.720 - 34280.369: 100.0000% ( 3) 00:08:36.963 00:08:36.963 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:36.963 ============================================================================== 00:08:36.963 Range in us Cumulative IO count 00:08:36.963 6150.302 - 6175.508: 0.0120% ( 1) 00:08:36.963 6175.508 - 6200.714: 0.0361% ( 2) 00:08:36.963 6200.714 - 6225.920: 0.0721% ( 3) 00:08:36.963 6225.920 - 6251.126: 0.0841% ( 1) 00:08:36.963 6251.126 - 6276.332: 0.1202% ( 3) 00:08:36.963 6276.332 - 6301.538: 0.1562% ( 3) 00:08:36.963 6301.538 - 6326.745: 0.3245% ( 14) 00:08:36.963 6326.745 - 6351.951: 0.3966% ( 6) 00:08:36.963 6351.951 - 6377.157: 0.4327% ( 3) 00:08:36.963 6377.157 - 6402.363: 0.5288% ( 8) 00:08:36.963 6402.363 - 6427.569: 0.5409% ( 1) 00:08:36.963 6427.569 - 6452.775: 0.5529% ( 1) 00:08:36.963 6452.775 - 6503.188: 0.5889% ( 3) 00:08:36.963 6503.188 - 6553.600: 0.6250% ( 3) 00:08:36.963 6553.600 - 6604.012: 0.6611% ( 3) 00:08:36.963 6604.012 - 6654.425: 0.6851% ( 2) 00:08:36.963 
6654.425 - 6704.837: 0.7212% ( 3) 00:08:36.963 6704.837 - 6755.249: 0.7572% ( 3) 00:08:36.963 6755.249 - 6805.662: 0.7692% ( 1) 00:08:36.963 10536.172 - 10586.585: 0.7933% ( 2) 00:08:36.963 10586.585 - 10636.997: 0.8173% ( 2) 00:08:36.963 10636.997 - 10687.409: 0.8413% ( 2) 00:08:36.963 10687.409 - 10737.822: 0.9495% ( 9) 00:08:36.963 10737.822 - 10788.234: 1.0337% ( 7) 00:08:36.963 10788.234 - 10838.646: 1.1779% ( 12) 00:08:36.963 10838.646 - 10889.058: 1.2861% ( 9) 00:08:36.963 10889.058 - 10939.471: 1.3822% ( 8) 00:08:36.963 10939.471 - 10989.883: 1.5144% ( 11) 00:08:36.963 10989.883 - 11040.295: 1.6226% ( 9) 00:08:36.963 11040.295 - 11090.708: 1.7428% ( 10) 00:08:36.963 11090.708 - 11141.120: 1.8510% ( 9) 00:08:36.963 11141.120 - 11191.532: 2.0433% ( 16) 00:08:36.963 11191.532 - 11241.945: 2.4399% ( 33) 00:08:36.963 11241.945 - 11292.357: 2.6082% ( 14) 00:08:36.963 11292.357 - 11342.769: 2.7885% ( 15) 00:08:36.963 11342.769 - 11393.182: 2.9688% ( 15) 00:08:36.963 11393.182 - 11443.594: 3.1490% ( 15) 00:08:36.963 11443.594 - 11494.006: 3.3053% ( 13) 00:08:36.963 11494.006 - 11544.418: 3.4495% ( 12) 00:08:36.963 11544.418 - 11594.831: 3.5938% ( 12) 00:08:36.963 11594.831 - 11645.243: 3.7620% ( 14) 00:08:36.963 11645.243 - 11695.655: 3.9183% ( 13) 00:08:36.963 11695.655 - 11746.068: 4.0986% ( 15) 00:08:36.963 11746.068 - 11796.480: 4.2188% ( 10) 00:08:36.963 11796.480 - 11846.892: 4.3510% ( 11) 00:08:36.963 11846.892 - 11897.305: 4.4952% ( 12) 00:08:36.963 11897.305 - 11947.717: 4.7476% ( 21) 00:08:36.963 11947.717 - 11998.129: 4.8558% ( 9) 00:08:36.963 11998.129 - 12048.542: 5.0361% ( 15) 00:08:36.963 12048.542 - 12098.954: 5.2524% ( 18) 00:08:36.963 12098.954 - 12149.366: 5.3966% ( 12) 00:08:36.963 12149.366 - 12199.778: 5.6010% ( 17) 00:08:36.963 12199.778 - 12250.191: 5.8053% ( 17) 00:08:36.963 12250.191 - 12300.603: 6.1178% ( 26) 00:08:36.963 12300.603 - 12351.015: 6.4543% ( 28) 00:08:36.963 12351.015 - 12401.428: 6.8269% ( 31) 00:08:36.963 12401.428 - 12451.840: 7.1635% ( 28) 00:08:36.963 12451.840 - 12502.252: 7.5240% ( 30) 00:08:36.963 12502.252 - 12552.665: 7.7885% ( 22) 00:08:36.963 12552.665 - 12603.077: 8.2091% ( 35) 00:08:36.963 12603.077 - 12653.489: 8.7019% ( 41) 00:08:36.963 12653.489 - 12703.902: 9.1947% ( 41) 00:08:36.963 12703.902 - 12754.314: 9.6514% ( 38) 00:08:36.963 12754.314 - 12804.726: 10.1803% ( 44) 00:08:36.963 12804.726 - 12855.138: 10.6490% ( 39) 00:08:36.963 12855.138 - 12905.551: 11.0938% ( 37) 00:08:36.963 12905.551 - 13006.375: 12.0673% ( 81) 00:08:36.963 13006.375 - 13107.200: 13.7620% ( 141) 00:08:36.963 13107.200 - 13208.025: 16.1178% ( 196) 00:08:36.963 13208.025 - 13308.849: 19.2067% ( 257) 00:08:36.963 13308.849 - 13409.674: 22.8365% ( 302) 00:08:36.963 13409.674 - 13510.498: 26.5144% ( 306) 00:08:36.963 13510.498 - 13611.323: 30.0962% ( 298) 00:08:36.963 13611.323 - 13712.148: 33.1370% ( 253) 00:08:36.963 13712.148 - 13812.972: 35.3846% ( 187) 00:08:36.963 13812.972 - 13913.797: 37.1995% ( 151) 00:08:36.963 13913.797 - 14014.622: 39.0505% ( 154) 00:08:36.963 14014.622 - 14115.446: 40.9135% ( 155) 00:08:36.963 14115.446 - 14216.271: 42.7404% ( 152) 00:08:36.963 14216.271 - 14317.095: 44.1947% ( 121) 00:08:36.963 14317.095 - 14417.920: 45.8534% ( 138) 00:08:36.963 14417.920 - 14518.745: 47.5601% ( 142) 00:08:36.963 14518.745 - 14619.569: 49.5793% ( 168) 00:08:36.963 14619.569 - 14720.394: 51.3702% ( 149) 00:08:36.963 14720.394 - 14821.218: 53.5697% ( 183) 00:08:36.963 14821.218 - 14922.043: 55.8654% ( 191) 00:08:36.963 14922.043 - 15022.868: 57.8365% 
( 164) 00:08:36.963 15022.868 - 15123.692: 59.1466% ( 109) 00:08:36.963 15123.692 - 15224.517: 60.2284% ( 90) 00:08:36.963 15224.517 - 15325.342: 61.4423% ( 101) 00:08:36.963 15325.342 - 15426.166: 62.4519% ( 84) 00:08:36.963 15426.166 - 15526.991: 63.3894% ( 78) 00:08:36.963 15526.991 - 15627.815: 64.2548% ( 72) 00:08:36.963 15627.815 - 15728.640: 65.0000% ( 62) 00:08:36.963 15728.640 - 15829.465: 65.7332% ( 61) 00:08:36.963 15829.465 - 15930.289: 66.4423% ( 59) 00:08:36.963 15930.289 - 16031.114: 67.3197% ( 73) 00:08:36.963 16031.114 - 16131.938: 68.2692% ( 79) 00:08:36.963 16131.938 - 16232.763: 69.0144% ( 62) 00:08:36.963 16232.763 - 16333.588: 69.6034% ( 49) 00:08:36.963 16333.588 - 16434.412: 70.5288% ( 77) 00:08:36.963 16434.412 - 16535.237: 71.1899% ( 55) 00:08:36.963 16535.237 - 16636.062: 71.6466% ( 38) 00:08:36.963 16636.062 - 16736.886: 72.0913% ( 37) 00:08:36.963 16736.886 - 16837.711: 72.7404% ( 54) 00:08:36.963 16837.711 - 16938.535: 73.6298% ( 74) 00:08:36.963 16938.535 - 17039.360: 74.3029% ( 56) 00:08:36.963 17039.360 - 17140.185: 74.7115% ( 34) 00:08:36.963 17140.185 - 17241.009: 75.0721% ( 30) 00:08:36.963 17241.009 - 17341.834: 75.7212% ( 54) 00:08:36.963 17341.834 - 17442.658: 76.3341% ( 51) 00:08:36.963 17442.658 - 17543.483: 76.7909% ( 38) 00:08:36.963 17543.483 - 17644.308: 77.2356% ( 37) 00:08:36.963 17644.308 - 17745.132: 77.7284% ( 41) 00:08:36.963 17745.132 - 17845.957: 78.1490% ( 35) 00:08:36.963 17845.957 - 17946.782: 78.6899% ( 45) 00:08:36.963 17946.782 - 18047.606: 79.3750% ( 57) 00:08:36.963 18047.606 - 18148.431: 79.9038% ( 44) 00:08:36.963 18148.431 - 18249.255: 80.7572% ( 71) 00:08:36.963 18249.255 - 18350.080: 81.4663% ( 59) 00:08:36.963 18350.080 - 18450.905: 82.1635% ( 58) 00:08:36.963 18450.905 - 18551.729: 83.0288% ( 72) 00:08:36.963 18551.729 - 18652.554: 83.5938% ( 47) 00:08:36.963 18652.554 - 18753.378: 84.4591% ( 72) 00:08:36.963 18753.378 - 18854.203: 85.2404% ( 65) 00:08:36.963 18854.203 - 18955.028: 86.4543% ( 101) 00:08:36.963 18955.028 - 19055.852: 87.7284% ( 106) 00:08:36.963 19055.852 - 19156.677: 89.0264% ( 108) 00:08:36.963 19156.677 - 19257.502: 90.1562% ( 94) 00:08:36.963 19257.502 - 19358.326: 91.4543% ( 108) 00:08:36.963 19358.326 - 19459.151: 92.5962% ( 95) 00:08:36.963 19459.151 - 19559.975: 93.7380% ( 95) 00:08:36.963 19559.975 - 19660.800: 94.7957% ( 88) 00:08:36.963 19660.800 - 19761.625: 95.5889% ( 66) 00:08:36.963 19761.625 - 19862.449: 96.5144% ( 77) 00:08:36.963 19862.449 - 19963.274: 97.1394% ( 52) 00:08:36.963 19963.274 - 20064.098: 97.6442% ( 42) 00:08:36.963 20064.098 - 20164.923: 98.1731% ( 44) 00:08:36.963 20164.923 - 20265.748: 98.5337% ( 30) 00:08:36.963 20265.748 - 20366.572: 98.7139% ( 15) 00:08:36.963 20366.572 - 20467.397: 98.8101% ( 8) 00:08:36.963 20467.397 - 20568.222: 98.8942% ( 7) 00:08:36.963 20568.222 - 20669.046: 98.9663% ( 6) 00:08:36.963 20669.046 - 20769.871: 99.0144% ( 4) 00:08:36.963 20769.871 - 20870.695: 99.0625% ( 4) 00:08:36.963 20870.695 - 20971.520: 99.1106% ( 4) 00:08:36.963 20971.520 - 21072.345: 99.1587% ( 4) 00:08:36.963 21072.345 - 21173.169: 99.1947% ( 3) 00:08:36.963 21173.169 - 21273.994: 99.2308% ( 3) 00:08:36.963 25206.154 - 25306.978: 99.2548% ( 2) 00:08:36.963 25306.978 - 25407.803: 99.2909% ( 3) 00:08:36.963 25407.803 - 25508.628: 99.3389% ( 4) 00:08:36.963 25508.628 - 25609.452: 99.3870% ( 4) 00:08:36.963 25609.452 - 25710.277: 99.4351% ( 4) 00:08:36.963 25710.277 - 25811.102: 99.4712% ( 3) 00:08:36.963 25811.102 - 26012.751: 99.5673% ( 8) 00:08:36.963 26012.751 - 26214.400: 
99.6635% ( 8) 00:08:36.963 26214.400 - 26416.049: 99.7476% ( 7) 00:08:36.963 26416.049 - 26617.698: 99.8678% ( 10) 00:08:36.963 26617.698 - 26819.348: 99.9760% ( 9) 00:08:36.963 26819.348 - 27020.997: 100.0000% ( 2) 00:08:36.963 00:08:37.224 23:03:56 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:37.224 00:08:37.224 real 0m2.512s 00:08:37.224 user 0m2.157s 00:08:37.224 sys 0m0.223s 00:08:37.224 23:03:56 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.224 23:03:56 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:37.224 ************************************ 00:08:37.224 END TEST nvme_perf 00:08:37.224 ************************************ 00:08:37.224 23:03:56 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:37.224 23:03:56 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:37.224 23:03:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.224 23:03:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.224 ************************************ 00:08:37.224 START TEST nvme_hello_world 00:08:37.224 ************************************ 00:08:37.224 23:03:56 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:37.485 Initializing NVMe Controllers 00:08:37.485 Attached to 0000:00:13.0 00:08:37.485 Namespace ID: 1 size: 1GB 00:08:37.485 Attached to 0000:00:10.0 00:08:37.485 Namespace ID: 1 size: 6GB 00:08:37.485 Attached to 0000:00:11.0 00:08:37.485 Namespace ID: 1 size: 5GB 00:08:37.485 Attached to 0000:00:12.0 00:08:37.485 Namespace ID: 1 size: 4GB 00:08:37.485 Namespace ID: 2 size: 4GB 00:08:37.485 Namespace ID: 3 size: 4GB 00:08:37.485 Initialization complete. 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 00:08:37.485 INFO: using host memory buffer for IO 00:08:37.485 Hello world! 
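The cumulative latency histograms above (one per attached controller and namespace) all share the same two-column shape: a bucket range in microseconds and the cumulative IO percentage and count reached by that bucket. A minimal awk sketch for pulling an approximate p99 out of a saved copy of this log — assuming the original one-entry-per-line layout as the tool prints it, and a hypothetical build.log filename:

    # Report the first bucket whose cumulative IO percentage reaches 99%,
    # i.e. an approximate p99 latency, for the first histogram in the file.
    awk '$3 == "-" && $5 ~ /%$/ {
        pct = $5; sub(/%$/, "", pct)   # strip the trailing percent sign
        if (pct + 0 >= 99.0) { print "~p99 bucket starts at " $2 " us"; exit }
    }' build.log

The exit keeps it to the first histogram; dropping it and keying on the "Latency histogram for" caption lines would report every controller in turn.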
00:08:37.485 00:08:37.485 real 0m0.225s 00:08:37.485 user 0m0.075s 00:08:37.485 sys 0m0.107s 00:08:37.485 23:03:56 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.485 23:03:56 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:37.485 ************************************ 00:08:37.485 END TEST nvme_hello_world 00:08:37.485 ************************************ 00:08:37.485 23:03:56 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:37.485 23:03:56 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.485 23:03:56 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.485 23:03:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.485 ************************************ 00:08:37.485 START TEST nvme_sgl 00:08:37.485 ************************************ 00:08:37.485 23:03:56 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:37.751 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:37.751 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:37.751 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:37.751 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:37.751 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:37.751 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:37.751 NVMe Readv/Writev Request test 00:08:37.751 Attached to 0000:00:13.0 00:08:37.751 Attached to 0000:00:10.0 00:08:37.751 Attached to 0000:00:11.0 00:08:37.751 Attached to 0000:00:12.0 00:08:37.751 0000:00:10.0: build_io_request_2 test passed 00:08:37.751 0000:00:10.0: build_io_request_4 test passed 00:08:37.751 0000:00:10.0: build_io_request_5 test passed 00:08:37.751 0000:00:10.0: build_io_request_6 test passed 00:08:37.751 0000:00:10.0: build_io_request_7 test passed 00:08:37.751 0000:00:10.0: build_io_request_10 test passed 00:08:37.751 0000:00:11.0: build_io_request_2 test passed 00:08:37.751 0000:00:11.0: build_io_request_4 test passed 00:08:37.751 0000:00:11.0: build_io_request_5 test passed 00:08:37.751 0000:00:11.0: build_io_request_6 test passed 00:08:37.751 0000:00:11.0: build_io_request_7 test passed 00:08:37.751 0000:00:11.0: build_io_request_10 test passed 00:08:37.751 Cleaning up... 00:08:37.751 00:08:37.751 real 0m0.282s 00:08:37.751 user 0m0.130s 00:08:37.751 sys 0m0.108s 00:08:37.751 23:03:56 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.751 ************************************ 00:08:37.751 23:03:56 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:37.751 END TEST nvme_sgl 00:08:37.751 ************************************ 00:08:37.751 23:03:57 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:37.751 23:03:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.751 23:03:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.751 23:03:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:37.751 ************************************ 00:08:37.751 START TEST nvme_e2edp 00:08:37.751 ************************************ 00:08:37.751 23:03:57 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:38.012 NVMe Write/Read with End-to-End data protection test 00:08:38.012 Attached to 0000:00:13.0 00:08:38.012 Attached to 0000:00:10.0 00:08:38.012 Attached to 0000:00:11.0 00:08:38.012 Attached to 0000:00:12.0 00:08:38.012 Cleaning up... 
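Each test block in this section is driven by the same run_test helper (the xtrace lines reference it in common/autotest_common.sh), which prints the START TEST/END TEST banners and times the test body — the real/user/sys lines that close each block. A simplified sketch of that pattern, not SPDK's actual implementation, which also manages xtrace state and argument checks:

    # Hypothetical, stripped-down re-creation of the run_test banner pattern.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"   # run the test binary; bash prints real/user/sys on exit
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # Example invocation mirroring the log:
    # run_test_sketch nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl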
00:08:38.012 00:08:38.012 real 0m0.210s 00:08:38.012 user 0m0.064s 00:08:38.012 sys 0m0.101s 00:08:38.012 ************************************ 00:08:38.012 END TEST nvme_e2edp 00:08:38.012 ************************************ 00:08:38.012 23:03:57 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.012 23:03:57 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:38.012 23:03:57 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:38.012 23:03:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.012 23:03:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.012 23:03:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.012 ************************************ 00:08:38.012 START TEST nvme_reserve 00:08:38.012 ************************************ 00:08:38.012 23:03:57 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:38.273 ===================================================== 00:08:38.273 NVMe Controller at PCI bus 0, device 19, function 0 00:08:38.273 ===================================================== 00:08:38.273 Reservations: Not Supported 00:08:38.273 ===================================================== 00:08:38.273 NVMe Controller at PCI bus 0, device 16, function 0 00:08:38.273 ===================================================== 00:08:38.273 Reservations: Not Supported 00:08:38.273 ===================================================== 00:08:38.273 NVMe Controller at PCI bus 0, device 17, function 0 00:08:38.273 ===================================================== 00:08:38.273 Reservations: Not Supported 00:08:38.273 ===================================================== 00:08:38.273 NVMe Controller at PCI bus 0, device 18, function 0 00:08:38.273 ===================================================== 00:08:38.273 Reservations: Not Supported 00:08:38.273 Reservation test passed 00:08:38.273 00:08:38.273 real 0m0.210s 00:08:38.273 user 0m0.065s 00:08:38.273 sys 0m0.096s 00:08:38.273 ************************************ 00:08:38.273 END TEST nvme_reserve 00:08:38.273 ************************************ 00:08:38.273 23:03:57 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.273 23:03:57 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:38.273 23:03:57 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:38.273 23:03:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.273 23:03:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.273 23:03:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.273 ************************************ 00:08:38.273 START TEST nvme_err_injection 00:08:38.273 ************************************ 00:08:38.273 23:03:57 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:38.662 NVMe Error Injection test 00:08:38.662 Attached to 0000:00:13.0 00:08:38.662 Attached to 0000:00:10.0 00:08:38.662 Attached to 0000:00:11.0 00:08:38.662 Attached to 0000:00:12.0 00:08:38.662 0000:00:13.0: get features failed as expected 00:08:38.662 0000:00:10.0: get features failed as expected 00:08:38.662 0000:00:11.0: get features failed as expected 00:08:38.662 0000:00:12.0: get features failed as expected 00:08:38.662 
0000:00:12.0: get features successfully as expected 00:08:38.662 0000:00:13.0: get features successfully as expected 00:08:38.662 0000:00:10.0: get features successfully as expected 00:08:38.662 0000:00:11.0: get features successfully as expected 00:08:38.662 0000:00:12.0: read failed as expected 00:08:38.662 0000:00:13.0: read failed as expected 00:08:38.662 0000:00:10.0: read failed as expected 00:08:38.662 0000:00:11.0: read failed as expected 00:08:38.662 0000:00:12.0: read successfully as expected 00:08:38.662 0000:00:13.0: read successfully as expected 00:08:38.662 0000:00:10.0: read successfully as expected 00:08:38.662 0000:00:11.0: read successfully as expected 00:08:38.662 Cleaning up... 00:08:38.662 ************************************ 00:08:38.662 END TEST nvme_err_injection 00:08:38.662 ************************************ 00:08:38.662 00:08:38.662 real 0m0.216s 00:08:38.662 user 0m0.067s 00:08:38.662 sys 0m0.105s 00:08:38.662 23:03:57 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.662 23:03:57 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:38.662 23:03:57 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:38.662 23:03:57 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:38.662 23:03:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.662 23:03:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.662 ************************************ 00:08:38.662 START TEST nvme_overhead 00:08:38.662 ************************************ 00:08:38.662 23:03:57 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:40.052 Initializing NVMe Controllers 00:08:40.052 Attached to 0000:00:13.0 00:08:40.052 Attached to 0000:00:10.0 00:08:40.052 Attached to 0000:00:11.0 00:08:40.052 Attached to 0000:00:12.0 00:08:40.052 Initialization complete. Launching workers. 
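nvme_overhead was launched above as overhead -o 4096 -t 1 -H -i 0. A hedged reading of those flags (confirm against the tool's usage text in your tree): -o sets the I/O size in bytes, -t the run time in seconds, -H enables the submit/complete histograms that follow, and -i the shared-memory id. A one-second window keeps CI fast but leaves the tail buckets sparse; a longer local run might look like:

    # Typically needs root for NVMe device access; path taken from this log.
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 60 -H -i 0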
00:08:40.052 submit (in ns) avg, min, max = 16700.1, 13008.5, 156175.4 00:08:40.052 complete (in ns) avg, min, max = 9437.5, 8090.0, 506229.2 00:08:40.052 00:08:40.052 Submit histogram 00:08:40.052 ================ 00:08:40.052 Range in us Cumulative Count 00:08:40.052 12.997 - 13.095: 0.0333% ( 1) 00:08:40.052 13.588 - 13.686: 0.0999% ( 2) 00:08:40.052 13.883 - 13.982: 0.1664% ( 2) 00:08:40.052 13.982 - 14.080: 0.1997% ( 1) 00:08:40.052 14.080 - 14.178: 0.2330% ( 1) 00:08:40.052 14.178 - 14.277: 0.2663% ( 1) 00:08:40.052 14.277 - 14.375: 0.3329% ( 2) 00:08:40.052 14.375 - 14.474: 0.7989% ( 14) 00:08:40.052 14.474 - 14.572: 1.8642% ( 32) 00:08:40.052 14.572 - 14.671: 4.2943% ( 73) 00:08:40.052 14.671 - 14.769: 8.9880% ( 141) 00:08:40.052 14.769 - 14.868: 15.4794% ( 195) 00:08:40.052 14.868 - 14.966: 22.2037% ( 202) 00:08:40.052 14.966 - 15.065: 29.4607% ( 218) 00:08:40.052 15.065 - 15.163: 36.1518% ( 201) 00:08:40.052 15.163 - 15.262: 42.3103% ( 185) 00:08:40.052 15.262 - 15.360: 47.9694% ( 170) 00:08:40.052 15.360 - 15.458: 52.2304% ( 128) 00:08:40.052 15.458 - 15.557: 56.1252% ( 117) 00:08:40.052 15.557 - 15.655: 58.4554% ( 70) 00:08:40.052 15.655 - 15.754: 60.8522% ( 72) 00:08:40.052 15.754 - 15.852: 62.9827% ( 64) 00:08:40.052 15.852 - 15.951: 65.4461% ( 74) 00:08:40.052 15.951 - 16.049: 67.4767% ( 61) 00:08:40.052 16.049 - 16.148: 69.4075% ( 58) 00:08:40.052 16.148 - 16.246: 71.5379% ( 64) 00:08:40.052 16.246 - 16.345: 72.8695% ( 40) 00:08:40.052 16.345 - 16.443: 74.7670% ( 57) 00:08:40.052 16.443 - 16.542: 76.3316% ( 47) 00:08:40.052 16.542 - 16.640: 77.6631% ( 40) 00:08:40.052 16.640 - 16.738: 78.8615% ( 36) 00:08:40.052 16.738 - 16.837: 79.8935% ( 31) 00:08:40.052 16.837 - 16.935: 80.8589% ( 29) 00:08:40.052 16.935 - 17.034: 81.8575% ( 30) 00:08:40.052 17.034 - 17.132: 82.3569% ( 15) 00:08:40.052 17.132 - 17.231: 82.9561% ( 18) 00:08:40.052 17.231 - 17.329: 83.1891% ( 7) 00:08:40.052 17.329 - 17.428: 83.5220% ( 10) 00:08:40.052 17.428 - 17.526: 83.9547% ( 13) 00:08:40.052 17.526 - 17.625: 84.0879% ( 4) 00:08:40.052 17.625 - 17.723: 84.5206% ( 13) 00:08:40.052 17.723 - 17.822: 84.7204% ( 6) 00:08:40.052 17.822 - 17.920: 85.0533% ( 10) 00:08:40.052 17.920 - 18.018: 85.1531% ( 3) 00:08:40.052 18.018 - 18.117: 85.2530% ( 3) 00:08:40.052 18.117 - 18.215: 85.3529% ( 3) 00:08:40.052 18.215 - 18.314: 85.6858% ( 10) 00:08:40.052 18.314 - 18.412: 85.7856% ( 3) 00:08:40.052 18.412 - 18.511: 85.9521% ( 5) 00:08:40.052 18.511 - 18.609: 86.1518% ( 6) 00:08:40.052 18.609 - 18.708: 86.4514% ( 9) 00:08:40.052 18.708 - 18.806: 86.7177% ( 8) 00:08:40.052 18.806 - 18.905: 86.8842% ( 5) 00:08:40.052 18.905 - 19.003: 87.2503% ( 11) 00:08:40.052 19.003 - 19.102: 87.5499% ( 9) 00:08:40.052 19.102 - 19.200: 87.8162% ( 8) 00:08:40.052 19.200 - 19.298: 87.8828% ( 2) 00:08:40.052 19.298 - 19.397: 88.2157% ( 10) 00:08:40.052 19.397 - 19.495: 88.4154% ( 6) 00:08:40.052 19.495 - 19.594: 88.5486% ( 4) 00:08:40.052 19.594 - 19.692: 88.8149% ( 8) 00:08:40.052 19.692 - 19.791: 89.1145% ( 9) 00:08:40.052 19.791 - 19.889: 89.2810% ( 5) 00:08:40.052 19.889 - 19.988: 89.5473% ( 8) 00:08:40.052 19.988 - 20.086: 90.0133% ( 14) 00:08:40.052 20.086 - 20.185: 90.2463% ( 7) 00:08:40.052 20.185 - 20.283: 90.5459% ( 9) 00:08:40.052 20.283 - 20.382: 90.9121% ( 11) 00:08:40.052 20.382 - 20.480: 91.0786% ( 5) 00:08:40.052 20.480 - 20.578: 91.6112% ( 16) 00:08:40.052 20.578 - 20.677: 92.0107% ( 12) 00:08:40.052 20.677 - 20.775: 92.3435% ( 10) 00:08:40.052 20.775 - 20.874: 92.8429% ( 15) 00:08:40.052 20.874 - 20.972: 93.2756% ( 13) 
00:08:40.052 20.972 - 21.071: 93.6085% ( 10) 00:08:40.052 21.071 - 21.169: 93.8415% ( 7) 00:08:40.052 21.169 - 21.268: 94.1079% ( 8) 00:08:40.052 21.268 - 21.366: 94.3409% ( 7) 00:08:40.052 21.366 - 21.465: 94.7071% ( 11) 00:08:40.052 21.465 - 21.563: 94.9401% ( 7) 00:08:40.052 21.563 - 21.662: 95.0399% ( 3) 00:08:40.052 21.662 - 21.760: 95.1065% ( 2) 00:08:40.052 21.760 - 21.858: 95.2064% ( 3) 00:08:40.052 21.858 - 21.957: 95.3063% ( 3) 00:08:40.052 21.957 - 22.055: 95.4727% ( 5) 00:08:40.052 22.055 - 22.154: 95.5393% ( 2) 00:08:40.052 22.252 - 22.351: 95.6724% ( 4) 00:08:40.052 22.351 - 22.449: 95.7723% ( 3) 00:08:40.052 22.449 - 22.548: 95.8389% ( 2) 00:08:40.052 22.548 - 22.646: 95.9720% ( 4) 00:08:40.052 22.646 - 22.745: 96.0719% ( 3) 00:08:40.052 22.745 - 22.843: 96.1718% ( 3) 00:08:40.052 22.843 - 22.942: 96.2051% ( 1) 00:08:40.052 22.942 - 23.040: 96.4048% ( 6) 00:08:40.052 23.040 - 23.138: 96.4381% ( 1) 00:08:40.052 23.138 - 23.237: 96.5379% ( 3) 00:08:40.052 23.237 - 23.335: 96.6378% ( 3) 00:08:40.052 23.335 - 23.434: 96.8375% ( 6) 00:08:40.052 23.434 - 23.532: 96.9374% ( 3) 00:08:40.052 23.532 - 23.631: 96.9707% ( 1) 00:08:40.052 23.631 - 23.729: 97.1039% ( 4) 00:08:40.052 23.729 - 23.828: 97.2703% ( 5) 00:08:40.052 23.828 - 23.926: 97.3369% ( 2) 00:08:40.052 23.926 - 24.025: 97.5033% ( 5) 00:08:40.052 24.025 - 24.123: 97.5699% ( 2) 00:08:40.052 24.123 - 24.222: 97.6698% ( 3) 00:08:40.052 24.222 - 24.320: 97.7696% ( 3) 00:08:40.052 24.320 - 24.418: 97.9028% ( 4) 00:08:40.052 24.418 - 24.517: 98.0360% ( 4) 00:08:40.052 24.615 - 24.714: 98.1025% ( 2) 00:08:40.052 24.714 - 24.812: 98.1358% ( 1) 00:08:40.052 24.812 - 24.911: 98.2690% ( 4) 00:08:40.052 24.911 - 25.009: 98.3356% ( 2) 00:08:40.052 25.009 - 25.108: 98.3688% ( 1) 00:08:40.052 25.108 - 25.206: 98.4021% ( 1) 00:08:40.052 25.206 - 25.403: 98.5020% ( 3) 00:08:40.052 25.600 - 25.797: 98.6684% ( 5) 00:08:40.052 25.797 - 25.994: 98.8016% ( 4) 00:08:40.052 25.994 - 26.191: 98.8349% ( 1) 00:08:40.052 26.191 - 26.388: 98.9015% ( 2) 00:08:40.052 26.388 - 26.585: 98.9348% ( 1) 00:08:40.052 27.372 - 27.569: 98.9680% ( 1) 00:08:40.052 27.963 - 28.160: 99.0013% ( 1) 00:08:40.052 28.357 - 28.554: 99.0346% ( 1) 00:08:40.052 28.554 - 28.751: 99.0679% ( 1) 00:08:40.052 28.751 - 28.948: 99.1012% ( 1) 00:08:40.052 29.735 - 29.932: 99.1345% ( 1) 00:08:40.052 30.326 - 30.523: 99.1678% ( 1) 00:08:40.052 30.523 - 30.720: 99.2011% ( 1) 00:08:40.052 31.311 - 31.508: 99.2344% ( 1) 00:08:40.052 32.295 - 32.492: 99.2676% ( 1) 00:08:40.052 32.492 - 32.689: 99.3009% ( 1) 00:08:40.052 34.068 - 34.265: 99.3342% ( 1) 00:08:40.053 34.265 - 34.462: 99.3675% ( 1) 00:08:40.053 34.462 - 34.658: 99.4341% ( 2) 00:08:40.053 35.052 - 35.249: 99.4674% ( 1) 00:08:40.053 36.234 - 36.431: 99.5007% ( 1) 00:08:40.053 37.809 - 38.006: 99.5340% ( 1) 00:08:40.053 40.763 - 40.960: 99.5672% ( 1) 00:08:40.053 48.049 - 48.246: 99.6005% ( 1) 00:08:40.053 52.382 - 52.775: 99.6338% ( 1) 00:08:40.053 63.015 - 63.409: 99.6671% ( 1) 00:08:40.053 66.954 - 67.348: 99.7004% ( 1) 00:08:40.053 73.649 - 74.043: 99.7337% ( 1) 00:08:40.053 79.557 - 79.951: 99.7670% ( 1) 00:08:40.053 90.978 - 91.372: 99.8003% ( 1) 00:08:40.053 109.489 - 110.277: 99.8336% ( 1) 00:08:40.053 113.428 - 114.215: 99.8668% ( 1) 00:08:40.053 121.305 - 122.092: 99.9001% ( 1) 00:08:40.053 126.818 - 127.606: 99.9334% ( 1) 00:08:40.053 137.058 - 137.846: 99.9667% ( 1) 00:08:40.053 155.963 - 156.751: 100.0000% ( 1) 00:08:40.053 00:08:40.053 Complete histogram 00:08:40.053 ================== 00:08:40.053 Range in us 
Cumulative Count 00:08:40.053 8.074 - 8.123: 0.2663% ( 8) 00:08:40.053 8.123 - 8.172: 2.1971% ( 58) 00:08:40.053 8.172 - 8.222: 6.8242% ( 139) 00:08:40.053 8.222 - 8.271: 14.3808% ( 227) 00:08:40.053 8.271 - 8.320: 23.7017% ( 280) 00:08:40.053 8.320 - 8.369: 33.5220% ( 295) 00:08:40.053 8.369 - 8.418: 41.7111% ( 246) 00:08:40.053 8.418 - 8.468: 48.6684% ( 209) 00:08:40.053 8.468 - 8.517: 53.8282% ( 155) 00:08:40.053 8.517 - 8.566: 57.7563% ( 118) 00:08:40.053 8.566 - 8.615: 61.2184% ( 104) 00:08:40.053 8.615 - 8.665: 64.0812% ( 86) 00:08:40.053 8.665 - 8.714: 66.5113% ( 73) 00:08:40.053 8.714 - 8.763: 68.1758% ( 50) 00:08:40.053 8.763 - 8.812: 69.4407% ( 38) 00:08:40.053 8.812 - 8.862: 70.7057% ( 38) 00:08:40.053 8.862 - 8.911: 71.4381% ( 22) 00:08:40.053 8.911 - 8.960: 72.1704% ( 22) 00:08:40.053 8.960 - 9.009: 72.6698% ( 15) 00:08:40.053 9.009 - 9.058: 73.1025% ( 13) 00:08:40.053 9.058 - 9.108: 73.6352% ( 16) 00:08:40.053 9.108 - 9.157: 73.9680% ( 10) 00:08:40.053 9.157 - 9.206: 74.2011% ( 7) 00:08:40.053 9.206 - 9.255: 74.4674% ( 8) 00:08:40.053 9.255 - 9.305: 74.7004% ( 7) 00:08:40.053 9.305 - 9.354: 74.9001% ( 6) 00:08:40.053 9.354 - 9.403: 75.0666% ( 5) 00:08:40.053 9.403 - 9.452: 75.1332% ( 2) 00:08:40.053 9.452 - 9.502: 75.2996% ( 5) 00:08:40.053 9.502 - 9.551: 75.3995% ( 3) 00:08:40.053 9.551 - 9.600: 75.5659% ( 5) 00:08:40.053 9.600 - 9.649: 75.6991% ( 4) 00:08:40.053 9.649 - 9.698: 75.7989% ( 3) 00:08:40.053 9.698 - 9.748: 75.8988% ( 3) 00:08:40.053 9.748 - 9.797: 76.1318% ( 7) 00:08:40.053 9.797 - 9.846: 76.3981% ( 8) 00:08:40.053 9.846 - 9.895: 76.7643% ( 11) 00:08:40.053 9.895 - 9.945: 77.5632% ( 24) 00:08:40.053 9.945 - 9.994: 78.0626% ( 15) 00:08:40.053 9.994 - 10.043: 79.1278% ( 32) 00:08:40.053 10.043 - 10.092: 80.1598% ( 31) 00:08:40.053 10.092 - 10.142: 81.3915% ( 37) 00:08:40.053 10.142 - 10.191: 82.8562% ( 44) 00:08:40.053 10.191 - 10.240: 84.5206% ( 50) 00:08:40.053 10.240 - 10.289: 86.1851% ( 50) 00:08:40.053 10.289 - 10.338: 87.5832% ( 42) 00:08:40.053 10.338 - 10.388: 89.1811% ( 48) 00:08:40.053 10.388 - 10.437: 90.6125% ( 43) 00:08:40.053 10.437 - 10.486: 91.5779% ( 29) 00:08:40.053 10.486 - 10.535: 92.3103% ( 22) 00:08:40.053 10.535 - 10.585: 93.3755% ( 32) 00:08:40.053 10.585 - 10.634: 94.0746% ( 21) 00:08:40.053 10.634 - 10.683: 94.7403% ( 20) 00:08:40.053 10.683 - 10.732: 95.2730% ( 16) 00:08:40.053 10.732 - 10.782: 95.6724% ( 12) 00:08:40.053 10.782 - 10.831: 95.9387% ( 8) 00:08:40.053 10.831 - 10.880: 96.3382% ( 12) 00:08:40.053 10.880 - 10.929: 96.3715% ( 1) 00:08:40.053 10.929 - 10.978: 96.5712% ( 6) 00:08:40.053 10.978 - 11.028: 96.6378% ( 2) 00:08:40.053 11.028 - 11.077: 96.7044% ( 2) 00:08:40.053 11.077 - 11.126: 96.9041% ( 6) 00:08:40.053 11.126 - 11.175: 97.0040% ( 3) 00:08:40.053 11.372 - 11.422: 97.0373% ( 1) 00:08:40.053 11.422 - 11.471: 97.0706% ( 1) 00:08:40.053 11.520 - 11.569: 97.1039% ( 1) 00:08:40.053 11.815 - 11.865: 97.1372% ( 1) 00:08:40.053 11.865 - 11.914: 97.1704% ( 1) 00:08:40.053 11.963 - 12.012: 97.2037% ( 1) 00:08:40.053 12.209 - 12.258: 97.2703% ( 2) 00:08:40.053 12.308 - 12.357: 97.3036% ( 1) 00:08:40.053 12.406 - 12.455: 97.3369% ( 1) 00:08:40.053 12.505 - 12.554: 97.4035% ( 2) 00:08:40.053 12.554 - 12.603: 97.4700% ( 2) 00:08:40.053 12.603 - 12.702: 97.5699% ( 3) 00:08:40.053 12.702 - 12.800: 97.6032% ( 1) 00:08:40.053 12.800 - 12.898: 97.6365% ( 1) 00:08:40.053 13.095 - 13.194: 97.6698% ( 1) 00:08:40.053 14.375 - 14.474: 97.7031% ( 1) 00:08:40.053 14.474 - 14.572: 97.7364% ( 1) 00:08:40.053 14.966 - 15.065: 97.7696% ( 1) 
00:08:40.053 15.262 - 15.360: 97.8029% ( 1) 00:08:40.053 15.852 - 15.951: 97.9028% ( 3) 00:08:40.053 16.049 - 16.148: 98.0027% ( 3) 00:08:40.053 16.148 - 16.246: 98.0692% ( 2) 00:08:40.053 16.443 - 16.542: 98.1358% ( 2) 00:08:40.053 16.542 - 16.640: 98.2024% ( 2) 00:08:40.053 16.640 - 16.738: 98.3356% ( 4) 00:08:40.053 16.738 - 16.837: 98.3688% ( 1) 00:08:40.053 16.837 - 16.935: 98.4354% ( 2) 00:08:40.053 16.935 - 17.034: 98.4687% ( 1) 00:08:40.053 17.034 - 17.132: 98.5020% ( 1) 00:08:40.053 17.132 - 17.231: 98.5353% ( 1) 00:08:40.053 17.231 - 17.329: 98.5686% ( 1) 00:08:40.053 17.329 - 17.428: 98.6019% ( 1) 00:08:40.053 17.526 - 17.625: 98.7017% ( 3) 00:08:40.053 17.625 - 17.723: 98.7350% ( 1) 00:08:40.053 17.723 - 17.822: 98.8016% ( 2) 00:08:40.053 17.822 - 17.920: 98.9015% ( 3) 00:08:40.053 17.920 - 18.018: 98.9348% ( 1) 00:08:40.053 18.018 - 18.117: 98.9680% ( 1) 00:08:40.053 18.215 - 18.314: 99.0013% ( 1) 00:08:40.053 18.412 - 18.511: 99.0346% ( 1) 00:08:40.053 18.511 - 18.609: 99.0679% ( 1) 00:08:40.053 18.708 - 18.806: 99.1012% ( 1) 00:08:40.053 18.806 - 18.905: 99.2011% ( 3) 00:08:40.053 19.003 - 19.102: 99.2344% ( 1) 00:08:40.053 19.102 - 19.200: 99.3009% ( 2) 00:08:40.053 19.495 - 19.594: 99.3342% ( 1) 00:08:40.053 19.594 - 19.692: 99.3675% ( 1) 00:08:40.053 19.692 - 19.791: 99.4008% ( 1) 00:08:40.053 20.677 - 20.775: 99.4341% ( 1) 00:08:40.053 21.760 - 21.858: 99.4674% ( 1) 00:08:40.053 23.237 - 23.335: 99.5007% ( 1) 00:08:40.053 23.335 - 23.434: 99.5340% ( 1) 00:08:40.053 24.025 - 24.123: 99.5672% ( 1) 00:08:40.053 25.403 - 25.600: 99.6005% ( 1) 00:08:40.053 25.797 - 25.994: 99.6338% ( 1) 00:08:40.053 25.994 - 26.191: 99.7004% ( 2) 00:08:40.053 27.963 - 28.160: 99.7337% ( 1) 00:08:40.053 28.160 - 28.357: 99.7670% ( 1) 00:08:40.053 29.735 - 29.932: 99.8003% ( 1) 00:08:40.053 30.917 - 31.114: 99.8336% ( 1) 00:08:40.053 39.975 - 40.172: 99.8668% ( 1) 00:08:40.053 153.600 - 154.388: 99.9001% ( 1) 00:08:40.053 157.538 - 158.326: 99.9334% ( 1) 00:08:40.053 174.080 - 174.868: 99.9667% ( 1) 00:08:40.053 504.123 - 507.274: 100.0000% ( 1) 00:08:40.053 00:08:40.053 00:08:40.053 real 0m1.221s 00:08:40.053 user 0m1.058s 00:08:40.053 sys 0m0.102s 00:08:40.053 23:03:59 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.053 23:03:59 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 ************************************ 00:08:40.053 END TEST nvme_overhead 00:08:40.053 ************************************ 00:08:40.053 23:03:59 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:40.053 23:03:59 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:40.053 23:03:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.053 23:03:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.053 ************************************ 00:08:40.053 START TEST nvme_arbitration 00:08:40.053 ************************************ 00:08:40.053 23:03:59 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:43.384 Initializing NVMe Controllers 00:08:43.384 Attached to 0000:00:13.0 00:08:43.384 Attached to 0000:00:10.0 00:08:43.384 Attached to 0000:00:11.0 00:08:43.384 Attached to 0000:00:12.0 00:08:43.384 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:43.384 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:43.384 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 
00:08:43.384 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:43.384 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:43.384 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:43.384 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:43.384 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:43.384 Initialization complete. Launching workers. 00:08:43.384 Starting thread on core 1 with urgent priority queue 00:08:43.384 Starting thread on core 2 with urgent priority queue 00:08:43.384 Starting thread on core 3 with urgent priority queue 00:08:43.384 Starting thread on core 0 with urgent priority queue 00:08:43.384 QEMU NVMe Ctrl (12343 ) core 0: 4458.67 IO/s 22.43 secs/100000 ios 00:08:43.384 QEMU NVMe Ctrl (12342 ) core 0: 4458.67 IO/s 22.43 secs/100000 ios 00:08:43.384 QEMU NVMe Ctrl (12340 ) core 1: 4586.67 IO/s 21.80 secs/100000 ios 00:08:43.384 QEMU NVMe Ctrl (12342 ) core 1: 4586.67 IO/s 21.80 secs/100000 ios 00:08:43.384 QEMU NVMe Ctrl (12341 ) core 2: 4010.67 IO/s 24.93 secs/100000 ios 00:08:43.384 QEMU NVMe Ctrl (12342 ) core 3: 4117.33 IO/s 24.29 secs/100000 ios 00:08:43.384 ======================================================== 00:08:43.384 00:08:43.384 00:08:43.384 real 0m3.249s 00:08:43.384 user 0m9.008s 00:08:43.384 sys 0m0.132s 00:08:43.384 ************************************ 00:08:43.385 23:04:02 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.385 23:04:02 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:43.385 END TEST nvme_arbitration 00:08:43.385 ************************************ 00:08:43.385 23:04:02 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:43.385 23:04:02 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:43.385 23:04:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.385 23:04:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.385 ************************************ 00:08:43.385 START TEST nvme_single_aen 00:08:43.385 ************************************ 00:08:43.385 23:04:02 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:43.385 Asynchronous Event Request test 00:08:43.385 Attached to 0000:00:13.0 00:08:43.385 Attached to 0000:00:10.0 00:08:43.385 Attached to 0000:00:11.0 00:08:43.385 Attached to 0000:00:12.0 00:08:43.385 Reset controller to setup AER completions for this process 00:08:43.385 Registering asynchronous event callbacks... 
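The single-AEN test underway here works by forcing a temperature alert: NVMe reports temperatures in Kelvin with an integer offset of 273, so the 343 K thresholds printed below correspond to 70 Celsius and the 323 K composite readings to 50 Celsius. Lowering the threshold under the current reading makes every controller fire an asynchronous event immediately. A minimal shell sketch of the conversion and the trigger condition, using the values from this run (the variable names are illustrative, not from the harness):

    # NVMe convention: temperature in Kelvin, K = C + 273
    orig_threshold_k=343            # 343 K -> 70 C, the original threshold
    current_temp_k=323              # 323 K -> 50 C, the current composite temperature
    echo "threshold $((orig_threshold_k - 273)) C, current $((current_temp_k - 273)) C"
    new_threshold_k=$((current_temp_k - 1))   # anything below 323 K trips the alert at once
    echo "lowering threshold to $new_threshold_k K to trigger the AER"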
00:08:43.385 Getting orig temperature thresholds of all controllers 00:08:43.385 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:43.385 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:43.385 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:43.385 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:43.385 Setting all controllers temperature threshold low to trigger AER 00:08:43.385 Waiting for all controllers temperature threshold to be set lower 00:08:43.385 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:43.385 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:43.385 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:43.385 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:43.386 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:43.386 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:43.386 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:43.386 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:43.386 Waiting for all controllers to trigger AER and reset threshold 00:08:43.386 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:43.386 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:43.386 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:43.386 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:43.386 Cleaning up... 00:08:43.386 ************************************ 00:08:43.386 END TEST nvme_single_aen 00:08:43.386 ************************************ 00:08:43.386 00:08:43.386 real 0m0.216s 00:08:43.386 user 0m0.073s 00:08:43.386 sys 0m0.096s 00:08:43.386 23:04:02 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.386 23:04:02 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:43.653 23:04:02 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:43.653 23:04:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.653 23:04:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.653 23:04:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.653 ************************************ 00:08:43.653 START TEST nvme_doorbell_aers 00:08:43.653 ************************************ 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
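The xtrace around this point shows how the doorbell test builds its device list: scripts/gen_nvme.sh emits a JSON bdev configuration and jq extracts each controller's PCI address (traddr), yielding the four controllers printed next. A standalone sketch of the same pattern, assuming the repository path used throughout this log:

    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses the way get_nvme_bdfs does:
    # gen_nvme.sh prints a JSON config, jq pulls every traddr out of it.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0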
00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:43.653 23:04:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:43.914 [2024-11-18 23:04:03.069596] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:08:53.914 Executing: test_write_invalid_db 00:08:53.914 Waiting for AER completion... 00:08:53.914 Failure: test_write_invalid_db 00:08:53.914 00:08:53.914 Executing: test_invalid_db_write_overflow_sq 00:08:53.914 Waiting for AER completion... 00:08:53.914 Failure: test_invalid_db_write_overflow_sq 00:08:53.914 00:08:53.914 Executing: test_invalid_db_write_overflow_cq 00:08:53.914 Waiting for AER completion... 00:08:53.914 Failure: test_invalid_db_write_overflow_cq 00:08:53.914 00:08:53.914 23:04:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:53.914 23:04:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:53.914 [2024-11-18 23:04:13.112098] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:03.883 Executing: test_write_invalid_db 00:09:03.883 Waiting for AER completion... 00:09:03.883 Failure: test_write_invalid_db 00:09:03.883 00:09:03.883 Executing: test_invalid_db_write_overflow_sq 00:09:03.883 Waiting for AER completion... 00:09:03.883 Failure: test_invalid_db_write_overflow_sq 00:09:03.883 00:09:03.883 Executing: test_invalid_db_write_overflow_cq 00:09:03.883 Waiting for AER completion... 00:09:03.883 Failure: test_invalid_db_write_overflow_cq 00:09:03.883 00:09:03.883 23:04:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:03.883 23:04:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:03.883 [2024-11-18 23:04:23.143049] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:13.998 Executing: test_write_invalid_db 00:09:13.998 Waiting for AER completion... 00:09:13.998 Failure: test_write_invalid_db 00:09:13.998 00:09:13.998 Executing: test_invalid_db_write_overflow_sq 00:09:13.998 Waiting for AER completion... 00:09:13.998 Failure: test_invalid_db_write_overflow_sq 00:09:13.998 00:09:13.998 Executing: test_invalid_db_write_overflow_cq 00:09:13.998 Waiting for AER completion... 
00:09:13.998 Failure: test_invalid_db_write_overflow_cq 00:09:13.998 00:09:13.998 23:04:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:13.998 23:04:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:13.998 [2024-11-18 23:04:33.169841] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 Executing: test_write_invalid_db 00:09:23.984 Waiting for AER completion... 00:09:23.984 Failure: test_write_invalid_db 00:09:23.984 00:09:23.984 Executing: test_invalid_db_write_overflow_sq 00:09:23.984 Waiting for AER completion... 00:09:23.984 Failure: test_invalid_db_write_overflow_sq 00:09:23.984 00:09:23.984 Executing: test_invalid_db_write_overflow_cq 00:09:23.984 Waiting for AER completion... 00:09:23.984 Failure: test_invalid_db_write_overflow_cq 00:09:23.984 00:09:23.984 00:09:23.984 real 0m40.213s 00:09:23.984 user 0m34.230s 00:09:23.984 sys 0m5.575s 00:09:23.984 ************************************ 00:09:23.984 END TEST nvme_doorbell_aers 00:09:23.984 ************************************ 00:09:23.984 23:04:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.984 23:04:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 23:04:43 nvme -- nvme/nvme.sh@97 -- # uname 00:09:23.984 23:04:43 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:23.984 23:04:43 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:23.984 23:04:43 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:23.984 23:04:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.984 23:04:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.984 ************************************ 00:09:23.984 START TEST nvme_multi_aen 00:09:23.984 ************************************ 00:09:23.984 23:04:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:23.984 [2024-11-18 23:04:43.212538] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.212599] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.212609] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.213650] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.213675] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.213684] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.214569] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. 
Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.214593] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.214602] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.215511] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.215605] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 [2024-11-18 23:04:43.215673] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75564) is not found. Dropping the request. 00:09:23.984 Child process pid: 76092 00:09:24.243 [Child] Asynchronous Event Request test 00:09:24.243 [Child] Attached to 0000:00:13.0 00:09:24.243 [Child] Attached to 0000:00:10.0 00:09:24.243 [Child] Attached to 0000:00:11.0 00:09:24.243 [Child] Attached to 0000:00:12.0 00:09:24.243 [Child] Registering asynchronous event callbacks... 00:09:24.243 [Child] Getting orig temperature thresholds of all controllers 00:09:24.243 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:24.243 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 [Child] Cleaning up... 00:09:24.243 Asynchronous Event Request test 00:09:24.243 Attached to 0000:00:13.0 00:09:24.243 Attached to 0000:00:10.0 00:09:24.243 Attached to 0000:00:11.0 00:09:24.243 Attached to 0000:00:12.0 00:09:24.243 Reset controller to setup AER completions for this process 00:09:24.243 Registering asynchronous event callbacks... 
00:09:24.243 Getting orig temperature thresholds of all controllers 00:09:24.243 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.243 Setting all controllers temperature threshold low to trigger AER 00:09:24.243 Waiting for all controllers temperature threshold to be set lower 00:09:24.243 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:24.243 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:24.243 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:24.243 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.243 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:24.243 Waiting for all controllers to trigger AER and reset threshold 00:09:24.243 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.243 Cleaning up... 00:09:24.243 00:09:24.243 real 0m0.388s 00:09:24.243 user 0m0.118s 00:09:24.243 sys 0m0.165s 00:09:24.243 23:04:43 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.243 23:04:43 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:24.243 ************************************ 00:09:24.243 END TEST nvme_multi_aen 00:09:24.243 ************************************ 00:09:24.244 23:04:43 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:24.244 23:04:43 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.244 23:04:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.244 23:04:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.244 ************************************ 00:09:24.244 START TEST nvme_startup 00:09:24.244 ************************************ 00:09:24.244 23:04:43 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:24.502 Initializing NVMe Controllers 00:09:24.502 Attached to 0000:00:13.0 00:09:24.502 Attached to 0000:00:10.0 00:09:24.502 Attached to 0000:00:11.0 00:09:24.502 Attached to 0000:00:12.0 00:09:24.502 Initialization complete. 00:09:24.502 Time used:113284.922 (us). 
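Each run_test block closes with the accounting that follows: real is wall-clock time, user and sys are CPU time spent in user and kernel mode. For nvme_startup the 113284.922 us the tool reports internally is consistent with the 0m0.165s wall time, since controller attach dominates the run. A sketch of the same measurement with the bash time keyword, not the harness's actual wrapper (the binary path is the one shown in this log):

    # Produces the real/user/sys triple seen after every test in this log.
    time /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000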
00:09:24.502 00:09:24.502 real 0m0.165s 00:09:24.502 user 0m0.055s 00:09:24.502 sys 0m0.074s 00:09:24.502 23:04:43 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.502 23:04:43 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:24.502 ************************************ 00:09:24.502 END TEST nvme_startup 00:09:24.502 ************************************ 00:09:24.502 23:04:43 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:24.502 23:04:43 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.502 23:04:43 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.502 23:04:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.502 ************************************ 00:09:24.502 START TEST nvme_multi_secondary 00:09:24.502 ************************************ 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=76137 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=76138 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:24.502 23:04:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:27.805 Initializing NVMe Controllers 00:09:27.805 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:27.805 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:27.805 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:27.805 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:27.805 Initialization complete. Launching workers. 
00:09:27.805 ======================================================== 00:09:27.805 Latency(us) 00:09:27.805 Device Information : IOPS MiB/s Average min max 00:09:27.805 PCIE (0000:00:13.0) NSID 1 from core 2: 2687.15 10.50 5953.84 904.42 13018.52 00:09:27.805 PCIE (0000:00:10.0) NSID 1 from core 2: 2687.15 10.50 5952.35 907.85 13877.23 00:09:27.805 PCIE (0000:00:11.0) NSID 1 from core 2: 2687.15 10.50 5954.22 924.32 13405.78 00:09:27.805 PCIE (0000:00:12.0) NSID 1 from core 2: 2687.15 10.50 5954.51 992.50 16881.12 00:09:27.805 PCIE (0000:00:12.0) NSID 2 from core 2: 2687.15 10.50 5962.36 951.02 13448.01 00:09:27.805 PCIE (0000:00:12.0) NSID 3 from core 2: 2687.15 10.50 5959.80 944.80 13227.53 00:09:27.805 ======================================================== 00:09:27.805 Total : 16122.91 62.98 5956.18 904.42 16881.12 00:09:27.805 00:09:27.805 Initializing NVMe Controllers 00:09:27.805 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:27.805 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:27.805 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:27.805 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:27.805 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:27.805 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:27.805 Initialization complete. Launching workers. 00:09:27.805 ======================================================== 00:09:27.805 Latency(us) 00:09:27.805 Device Information : IOPS MiB/s Average min max 00:09:27.805 PCIE (0000:00:13.0) NSID 1 from core 1: 7364.12 28.77 2172.27 862.14 5820.21 00:09:27.805 PCIE (0000:00:10.0) NSID 1 from core 1: 7364.12 28.77 2171.53 843.07 5968.37 00:09:27.805 PCIE (0000:00:11.0) NSID 1 from core 1: 7364.12 28.77 2172.71 863.95 6304.26 00:09:27.805 PCIE (0000:00:12.0) NSID 1 from core 1: 7364.12 28.77 2172.80 847.60 6645.12 00:09:27.805 PCIE (0000:00:12.0) NSID 2 from core 1: 7364.12 28.77 2172.90 857.09 6234.20 00:09:27.805 PCIE (0000:00:12.0) NSID 3 from core 1: 7364.12 28.77 2173.01 862.81 6096.53 00:09:27.805 ======================================================== 00:09:27.805 Total : 44184.72 172.60 2172.54 843.07 6645.12 00:09:27.805 00:09:27.805 23:04:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 76137 00:09:29.705 Initializing NVMe Controllers 00:09:29.705 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.705 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.705 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.705 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.705 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:29.705 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:29.705 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:29.705 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:29.705 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:29.705 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:29.705 Initialization complete. Launching workers. 
00:09:29.705 ======================================================== 00:09:29.705 Latency(us) 00:09:29.705 Device Information : IOPS MiB/s Average min max 00:09:29.705 PCIE (0000:00:13.0) NSID 1 from core 0: 10607.42 41.44 1508.00 765.63 6421.24 00:09:29.705 PCIE (0000:00:10.0) NSID 1 from core 0: 10607.42 41.44 1507.11 746.53 7053.21 00:09:29.705 PCIE (0000:00:11.0) NSID 1 from core 0: 10607.42 41.44 1507.93 758.27 6475.82 00:09:29.705 PCIE (0000:00:12.0) NSID 1 from core 0: 10607.42 41.44 1507.88 568.08 6895.82 00:09:29.705 PCIE (0000:00:12.0) NSID 2 from core 0: 10607.42 41.44 1507.85 501.31 7464.06 00:09:29.705 PCIE (0000:00:12.0) NSID 3 from core 0: 10607.42 41.44 1507.81 422.18 6599.87 00:09:29.705 ======================================================== 00:09:29.705 Total : 63644.51 248.61 1507.76 422.18 7464.06 00:09:29.705 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 76138 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=76213 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=76214 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:29.705 23:04:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:32.997 Initializing NVMe Controllers 00:09:32.997 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.997 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:32.997 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:32.997 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:32.997 Initialization complete. Launching workers. 
00:09:32.997 ======================================================== 00:09:32.997 Latency(us) 00:09:32.997 Device Information : IOPS MiB/s Average min max 00:09:32.997 PCIE (0000:00:13.0) NSID 1 from core 1: 6219.20 24.29 2572.25 781.64 7933.59 00:09:32.997 PCIE (0000:00:10.0) NSID 1 from core 1: 6219.20 24.29 2571.39 744.12 8365.87 00:09:32.997 PCIE (0000:00:11.0) NSID 1 from core 1: 6219.20 24.29 2572.26 767.65 8776.83 00:09:32.997 PCIE (0000:00:12.0) NSID 1 from core 1: 6219.20 24.29 2572.08 781.36 8278.65 00:09:32.997 PCIE (0000:00:12.0) NSID 2 from core 1: 6219.20 24.29 2572.01 790.79 8100.76 00:09:32.997 PCIE (0000:00:12.0) NSID 3 from core 1: 6219.20 24.29 2572.22 779.94 8517.43 00:09:32.997 ======================================================== 00:09:32.997 Total : 37315.21 145.76 2572.03 744.12 8776.83 00:09:32.997 00:09:32.997 Initializing NVMe Controllers 00:09:32.997 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:32.997 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:32.997 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:32.997 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:32.997 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:32.997 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:32.997 Initialization complete. Launching workers. 00:09:32.997 ======================================================== 00:09:32.997 Latency(us) 00:09:32.997 Device Information : IOPS MiB/s Average min max 00:09:32.997 PCIE (0000:00:13.0) NSID 1 from core 0: 5294.03 20.68 3021.81 1150.33 9542.78 00:09:32.997 PCIE (0000:00:10.0) NSID 1 from core 0: 5294.03 20.68 3020.68 982.68 9622.86 00:09:32.997 PCIE (0000:00:11.0) NSID 1 from core 0: 5294.03 20.68 3021.63 1128.50 9895.58 00:09:32.997 PCIE (0000:00:12.0) NSID 1 from core 0: 5294.03 20.68 3021.52 1112.44 10541.82 00:09:32.997 PCIE (0000:00:12.0) NSID 2 from core 0: 5294.03 20.68 3021.42 1096.78 11539.38 00:09:32.997 PCIE (0000:00:12.0) NSID 3 from core 0: 5294.03 20.68 3021.31 533.48 9650.01 00:09:32.997 ======================================================== 00:09:32.997 Total : 31764.18 124.08 3021.40 533.48 11539.38 00:09:32.997 00:09:35.565 Initializing NVMe Controllers 00:09:35.565 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:35.565 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:35.565 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:35.565 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:35.565 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:35.565 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:35.565 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:35.565 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:35.565 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:35.565 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:35.565 Initialization complete. Launching workers. 
00:09:35.565 ======================================================== 00:09:35.565 Latency(us) 00:09:35.565 Device Information : IOPS MiB/s Average min max 00:09:35.565 PCIE (0000:00:13.0) NSID 1 from core 2: 2828.15 11.05 5656.99 1047.96 21185.98 00:09:35.565 PCIE (0000:00:10.0) NSID 1 from core 2: 2828.15 11.05 5655.47 1025.74 21947.63 00:09:35.565 PCIE (0000:00:11.0) NSID 1 from core 2: 2828.15 11.05 5656.89 1063.69 21622.94 00:09:35.565 PCIE (0000:00:12.0) NSID 1 from core 2: 2828.15 11.05 5656.51 1020.66 19742.66 00:09:35.565 PCIE (0000:00:12.0) NSID 2 from core 2: 2828.15 11.05 5656.73 1032.43 20599.57 00:09:35.565 PCIE (0000:00:12.0) NSID 3 from core 2: 2828.15 11.05 5656.92 713.93 20724.01 00:09:35.565 ======================================================== 00:09:35.565 Total : 16968.89 66.28 5656.59 713.93 21947.63 00:09:35.565 00:09:35.565 ************************************ 00:09:35.565 END TEST nvme_multi_secondary 00:09:35.565 ************************************ 00:09:35.565 23:04:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 76213 00:09:35.565 23:04:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 76214 00:09:35.565 00:09:35.565 real 0m10.778s 00:09:35.565 user 0m18.273s 00:09:35.565 sys 0m0.603s 00:09:35.565 23:04:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.565 23:04:54 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:35.565 23:04:54 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:35.565 23:04:54 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/75167 ]] 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1090 -- # kill 75167 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1091 -- # wait 75167 00:09:35.565 [2024-11-18 23:04:54.512220] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512289] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512305] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512321] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512794] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512825] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512839] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.512855] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.513422] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 
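The repeated nvme_pcie_common.c errors in this stretch are expected teardown noise: kill_stub stops the stub process, and pending admin requests still attributed to the already-exited pid 76091 are dropped instead of completed. A condensed sketch of the kill_stub steps visible in the xtrace, with the stub pid recorded in this run:

    # Stop the SPDK stub if it is still alive, then reap it and remove its socket.
    stubpid=75167                     # pid recorded earlier in this run
    if [[ -e /proc/$stubpid ]]; then  # same liveness check the harness uses
        kill "$stubpid"
        wait "$stubpid" || true       # in-flight requests get dropped, as logged above
    fi
    rm -f /var/run/spdk_stub0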
00:09:35.565 [2024-11-18 23:04:54.513467] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.513482] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.513499] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.514070] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.514116] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.514130] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 [2024-11-18 23:04:54.514149] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76091) is not found. Dropping the request. 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:35.565 23:04:54 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.565 23:04:54 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.565 ************************************ 00:09:35.565 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:35.565 ************************************ 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:35.565 * Looking for test storage... 
00:09:35.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:35.565 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.566 --rc genhtml_branch_coverage=1 00:09:35.566 --rc genhtml_function_coverage=1 00:09:35.566 --rc genhtml_legend=1 00:09:35.566 --rc geninfo_all_blocks=1 00:09:35.566 --rc geninfo_unexecuted_blocks=1 00:09:35.566 00:09:35.566 ' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.566 --rc genhtml_branch_coverage=1 00:09:35.566 --rc genhtml_function_coverage=1 00:09:35.566 --rc genhtml_legend=1 00:09:35.566 --rc geninfo_all_blocks=1 00:09:35.566 --rc geninfo_unexecuted_blocks=1 00:09:35.566 00:09:35.566 ' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.566 --rc genhtml_branch_coverage=1 00:09:35.566 --rc genhtml_function_coverage=1 00:09:35.566 --rc genhtml_legend=1 00:09:35.566 --rc geninfo_all_blocks=1 00:09:35.566 --rc geninfo_unexecuted_blocks=1 00:09:35.566 00:09:35.566 ' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.566 --rc genhtml_branch_coverage=1 00:09:35.566 --rc genhtml_function_coverage=1 00:09:35.566 --rc genhtml_legend=1 00:09:35.566 --rc geninfo_all_blocks=1 00:09:35.566 --rc geninfo_unexecuted_blocks=1 00:09:35.566 00:09:35.566 ' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:35.566 
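The scripts/common.sh xtrace above steps through cmp_versions: both version strings are split on dots and dashes, then compared field by field, which is how the harness picks its lcov options for this environment before the test proper starts. A compact sketch of that comparison, condensed from the steps in the trace rather than copied verbatim (numeric version fields only):

    # lt A B: succeed when version A sorts before version B.
    lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "older than 2.x, keep the legacy --rc lcov options"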
23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:35.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=76375 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 76375 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 76375 ']' 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
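The test body that follows is the heart of bdev_nvme_reset_stuck_adm_cmd: attach the controller at 0000:00:10.0, inject a one-shot error that holds admin opcode 0x0a (Get Features) for up to 15 s, send a Get Features through bdev_nvme_send_cmd, reset the controller while that command is stuck, then decode the saved completion to confirm the injected status (sct 0, sc 1, the INVALID OPCODE (00/01) seen later) and that the reset fit inside test_timeout. A condensed sketch of that flow using the same RPCs the xtrace shows; the output path and the base64 command payload here are placeholders, the real payload appears in the trace below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # One-shot injection: hold admin opcode 10 for 15 s, then fail it with sct=0 sc=1.
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" > /tmp/cpl.json &
    get_feat_pid=$!
    sleep 2                                  # give the stuck command time to queue
    $rpc bdev_nvme_reset_controller nvme0    # the reset completes the stuck command
    wait "$get_feat_pid"
    # Decode the completion the way base64_decode_bits does: base64 -> raw status bytes.
    jq -r .cpl /tmp/cpl.json | base64 -d | hexdump -ve '/1 "0x%02x\n"'
    $rpc bdev_nvme_detach_controller nvme0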
00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.566 23:04:54 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.566 [2024-11-18 23:04:54.867615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:35.566 [2024-11-18 23:04:54.867706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76375 ] 00:09:35.824 [2024-11-18 23:04:55.017215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.824 [2024-11-18 23:04:55.061835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.824 [2024-11-18 23:04:55.062058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.824 [2024-11-18 23:04:55.062307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.824 [2024-11-18 23:04:55.062395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.392 nvme0n1 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_JSq1V.txt 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.392 true 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731971095 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=76398 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:36.392 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:36.393 23:04:55 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h 
-c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:38.922 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:38.922 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.922 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.922 [2024-11-18 23:04:57.760535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:38.922 [2024-11-18 23:04:57.760809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:38.922 [2024-11-18 23:04:57.760833] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:38.922 [2024-11-18 23:04:57.760849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:38.922 [2024-11-18 23:04:57.762984] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:38.922 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.922 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 76398 00:09:38.922 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 76398 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 76398 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_JSq1V.txt 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- 
# printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_JSq1V.txt 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 76375 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 76375 ']' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 76375 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76375 00:09:38.923 killing process with pid 76375 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76375' 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 76375 00:09:38.923 23:04:57 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 76375 00:09:38.923 23:04:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:38.923 23:04:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:38.923 00:09:38.923 real 0m3.609s 00:09:38.923 user 0m12.704s 00:09:38.923 sys 0m0.527s 00:09:38.923 ************************************ 00:09:38.923 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:38.923 
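[annotation] The trace above shows base64_decode_bits decoding the saved completion blob and pulling out the status-code (SC) and status-code-type (SCT) fields that the test compares against the injected error. A minimal re-sketch of that logic, matching the values in this log: the byte offsets assume a standard 16-byte NVMe completion with the status word in bytes 14-15, and the exact masking in the real nvme_reset_stuck_adm_cmd.sh may differ.

#!/usr/bin/env bash
# Sketch of the base64_decode_bits helper seen in the trace (assumed semantics:
# $1 = base64-encoded 16-byte NVMe completion, $2 = right shift, $3 = bitmask).
base64_decode_bits() {
    local bin_array status
    # Decode the blob into an array of per-byte hex values, e.g. (0x00 0x00 ... 0x02 0x00).
    bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
    # The status word is the last 16 bits: phase at bit 0, SC in bits 1-8, SCT in bits 9-11.
    status=$((bin_array[14] | bin_array[15] << 8))
    printf '0x%x\n' $(((status >> $2) & $3))
}

cpl=AAAAAAAAAAAAAAAAAAACAA==           # blob captured in the log (status word = 0x0002)
sc=$(base64_decode_bits "$cpl" 1 255)  # -> 0x1, Invalid Opcode, matching the injected error
sct=$(base64_decode_bits "$cpl" 9 3)   # -> 0x0, generic command status
echo "SC=$sc SCT=$sct"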
************************************ 00:09:38.923 23:04:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.923 23:04:58 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:38.923 23:04:58 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:38.923 23:04:58 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:38.923 23:04:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:38.923 23:04:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.923 23:04:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.923 ************************************ 00:09:38.923 START TEST nvme_fio 00:09:38.923 ************************************ 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:38.923 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:38.923 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:38.923 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.923 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:39.181 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:39.181 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.181 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.447 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.448 23:04:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1340 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.448 23:04:58 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.706 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.706 fio-3.35 00:09:39.706 Starting 1 thread 00:09:44.981 00:09:44.981 test: (groupid=0, jobs=1): err= 0: pid=76523: Mon Nov 18 23:05:03 2024 00:09:44.981 read: IOPS=20.6k, BW=80.4MiB/s (84.3MB/s)(161MiB/2001msec) 00:09:44.981 slat (nsec): min=3901, max=81744, avg=5983.34, stdev=2808.83 00:09:44.981 clat (usec): min=237, max=12560, avg=3099.69, stdev=1047.28 00:09:44.981 lat (usec): min=242, max=12613, avg=3105.67, stdev=1048.87 00:09:44.981 clat percentiles (usec): 00:09:44.981 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2507], 00:09:44.981 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:44.981 | 70.00th=[ 2933], 80.00th=[ 3359], 90.00th=[ 4621], 95.00th=[ 5669], 00:09:44.981 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[10683], 00:09:44.981 | 99.99th=[12387] 00:09:44.981 bw ( KiB/s): min=73304, max=85464, per=98.86%, avg=81344.00, stdev=6963.56, samples=3 00:09:44.981 iops : min=18324, max=21368, avg=20336.00, stdev=1742.64, samples=3 00:09:44.981 write: IOPS=20.5k, BW=80.1MiB/s (84.0MB/s)(160MiB/2001msec); 0 zone resets 00:09:44.981 slat (nsec): min=4080, max=79737, avg=6344.93, stdev=2797.40 00:09:44.981 clat (usec): min=210, max=12410, avg=3109.49, stdev=1048.73 00:09:44.981 lat (usec): min=216, max=12424, avg=3115.84, stdev=1050.32 00:09:44.981 clat percentiles (usec): 00:09:44.981 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2540], 00:09:44.981 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:44.981 | 70.00th=[ 2966], 80.00th=[ 3392], 90.00th=[ 4621], 95.00th=[ 5669], 00:09:44.981 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 9372], 99.95th=[10814], 00:09:44.981 | 99.99th=[12125] 00:09:44.981 bw ( KiB/s): min=73856, max=85496, per=99.29%, avg=81461.33, stdev=6590.50, samples=3 00:09:44.981 iops : min=18464, max=21374, avg=20365.33, stdev=1647.62, samples=3 00:09:44.981 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:44.981 lat (msec) : 2=0.22%, 4=85.29%, 10=14.36%, 20=0.07% 00:09:44.981 cpu : usr=99.05%, sys=0.10%, ctx=4, majf=0, minf=626 
00:09:44.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:44.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.981 issued rwts: total=41163,41044,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.981 00:09:44.981 Run status group 0 (all jobs): 00:09:44.981 READ: bw=80.4MiB/s (84.3MB/s), 80.4MiB/s-80.4MiB/s (84.3MB/s-84.3MB/s), io=161MiB (169MB), run=2001-2001msec 00:09:44.981 WRITE: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:44.981 ----------------------------------------------------- 00:09:44.981 Suppressions used: 00:09:44.981 count bytes template 00:09:44.981 1 32 /usr/src/fio/parse.c 00:09:44.981 1 8 libtcmalloc_minimal.so 00:09:44.981 ----------------------------------------------------- 00:09:44.981 00:09:44.981 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:44.981 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:44.981 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:44.981 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:45.240 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:45.240 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:45.240 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:45.240 23:05:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:45.240 23:05:04 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:45.240 23:05:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:45.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:45.498 fio-3.35 00:09:45.498 Starting 1 thread 00:09:52.061 00:09:52.061 test: (groupid=0, jobs=1): err= 0: pid=76579: Mon Nov 18 23:05:10 2024 00:09:52.061 read: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(173MiB/2001msec) 00:09:52.061 slat (nsec): min=3907, max=62575, avg=5575.69, stdev=2136.09 00:09:52.061 clat (usec): min=234, max=10373, avg=2882.42, stdev=750.20 00:09:52.061 lat (usec): min=239, max=10421, avg=2888.00, stdev=751.38 00:09:52.061 clat percentiles (usec): 00:09:52.061 | 1.00th=[ 2409], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:09:52.061 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:52.061 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3589], 95.00th=[ 4424], 00:09:52.061 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 8356], 99.95th=[ 8848], 00:09:52.061 | 99.99th=[10159] 00:09:52.061 bw ( KiB/s): min=88944, max=91056, per=100.00%, avg=90290.67, stdev=1169.87, samples=3 00:09:52.061 iops : min=22236, max=22764, avg=22572.67, stdev=292.47, samples=3 00:09:52.061 write: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec); 0 zone resets 00:09:52.061 slat (usec): min=4, max=138, avg= 5.94, stdev= 2.19 00:09:52.061 clat (usec): min=219, max=10235, avg=2885.56, stdev=749.91 00:09:52.061 lat (usec): min=223, max=10249, avg=2891.50, stdev=751.09 00:09:52.061 clat percentiles (usec): 00:09:52.061 | 1.00th=[ 2409], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:52.061 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2671], 00:09:52.061 | 70.00th=[ 2704], 80.00th=[ 2868], 90.00th=[ 3589], 95.00th=[ 4490], 00:09:52.061 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8848], 00:09:52.061 | 99.99th=[ 9896] 00:09:52.061 bw ( KiB/s): min=90216, max=90864, per=100.00%, avg=90541.33, stdev=324.01, samples=3 00:09:52.061 iops : min=22554, max=22716, avg=22635.33, stdev=81.00, samples=3 00:09:52.061 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:52.061 lat (msec) : 2=0.14%, 4=92.42%, 10=7.38%, 20=0.01% 00:09:52.061 cpu : usr=99.00%, sys=0.25%, ctx=3, majf=0, minf=627 00:09:52.061 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:52.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.061 issued rwts: total=44403,44091,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.061 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.061 00:09:52.061 Run status group 0 (all jobs): 00:09:52.061 READ: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=173MiB (182MB), run=2001-2001msec 00:09:52.061 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:09:52.061 ----------------------------------------------------- 00:09:52.061 Suppressions used: 00:09:52.061 count bytes template 00:09:52.061 1 32 /usr/src/fio/parse.c 00:09:52.061 1 8 libtcmalloc_minimal.so 00:09:52.061 ----------------------------------------------------- 00:09:52.061 00:09:52.061 
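[annotation] The same sequence now repeats for each discovered bdf: identify the namespace, pick a block size, then launch fio with the SPDK external ioengine. Because the plugin is built with ASan, the wrapper resolves the sanitizer runtime the plugin links against (via ldd) and force-loads it ahead of the plugin, since a sanitizer runtime must be the first DSO initialized. A condensed sketch of that wrapper, using the paths from this log; the real fio_plugin also checks libclang_rt.asan and its ldd/awk parsing is slightly more involved.

#!/usr/bin/env bash
# Condensed sketch of the fio_plugin wrapper traced above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
fio_dir=/usr/src/fio

# Find the sanitizer runtime the plugin was linked against (empty if none).
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')

# The sanitizer library must be preloaded before anything else, then the plugin.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    "$fio_dir/fio" "$@"

# Invoked per controller as seen in the log, e.g.:
#   fio_nvme example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096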
23:05:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:52.061 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:52.061 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:52.061 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:52.061 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:52.061 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:52.321 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:52.321 23:05:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:52.321 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:52.322 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:52.322 23:05:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:52.582 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:52.582 fio-3.35 00:09:52.582 Starting 1 thread 00:09:59.172 00:09:59.172 test: (groupid=0, jobs=1): err= 0: pid=76646: Mon Nov 18 23:05:18 2024 00:09:59.172 read: IOPS=22.3k, BW=86.9MiB/s (91.1MB/s)(174MiB/2001msec) 00:09:59.172 slat (nsec): min=3325, max=73160, avg=5083.27, stdev=2222.14 00:09:59.172 clat (usec): min=330, max=9817, avg=2869.27, stdev=882.18 00:09:59.172 lat (usec): min=335, max=9843, avg=2874.35, stdev=883.41 00:09:59.172 clat percentiles (usec): 00:09:59.172 | 1.00th=[ 2073], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:59.172 | 30.00th=[ 2474], 
40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:59.172 | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3785], 95.00th=[ 5014], 00:09:59.172 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 6980], 99.95th=[ 7177], 00:09:59.172 | 99.99th=[ 8979] 00:09:59.172 bw ( KiB/s): min=82856, max=90840, per=98.34%, avg=87528.00, stdev=4162.12, samples=3 00:09:59.172 iops : min=20714, max=22710, avg=21882.00, stdev=1040.53, samples=3 00:09:59.172 write: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec); 0 zone resets 00:09:59.172 slat (nsec): min=3530, max=93393, avg=5481.48, stdev=2328.99 00:09:59.172 clat (usec): min=354, max=9046, avg=2885.76, stdev=898.53 00:09:59.172 lat (usec): min=359, max=9054, avg=2891.24, stdev=899.80 00:09:59.172 clat percentiles (usec): 00:09:59.172 | 1.00th=[ 2073], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:59.172 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2671], 00:09:59.172 | 70.00th=[ 2769], 80.00th=[ 2999], 90.00th=[ 3884], 95.00th=[ 5080], 00:09:59.172 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 6980], 99.95th=[ 7177], 00:09:59.172 | 99.99th=[ 8979] 00:09:59.172 bw ( KiB/s): min=82856, max=91616, per=99.19%, avg=87682.67, stdev=4447.80, samples=3 00:09:59.172 iops : min=20714, max=22904, avg=21920.67, stdev=1111.95, samples=3 00:09:59.172 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:09:59.172 lat (msec) : 2=0.37%, 4=90.39%, 10=9.19% 00:09:59.172 cpu : usr=99.05%, sys=0.20%, ctx=3, majf=0, minf=626 00:09:59.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:59.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.172 issued rwts: total=44525,44219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.172 00:09:59.172 Run status group 0 (all jobs): 00:09:59.172 READ: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=174MiB (182MB), run=2001-2001msec 00:09:59.172 WRITE: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:09:59.172 ----------------------------------------------------- 00:09:59.172 Suppressions used: 00:09:59.172 count bytes template 00:09:59.172 1 32 /usr/src/fio/parse.c 00:09:59.172 1 8 libtcmalloc_minimal.so 00:09:59.172 ----------------------------------------------------- 00:09:59.172 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:59.172 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:59.434 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:59.434 23:05:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:59.434 23:05:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.695 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:59.695 fio-3.35 00:09:59.695 Starting 1 thread 00:10:06.282 00:10:06.282 test: (groupid=0, jobs=1): err= 0: pid=76701: Mon Nov 18 23:05:25 2024 00:10:06.282 read: IOPS=21.3k, BW=83.4MiB/s (87.4MB/s)(167MiB/2001msec) 00:10:06.282 slat (nsec): min=3329, max=60109, avg=5188.02, stdev=2369.70 00:10:06.282 clat (usec): min=248, max=9469, avg=2994.83, stdev=934.65 00:10:06.282 lat (usec): min=253, max=9512, avg=3000.02, stdev=935.97 00:10:06.282 clat percentiles (usec): 00:10:06.282 | 1.00th=[ 2114], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2442], 00:10:06.282 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:10:06.282 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4178], 95.00th=[ 5342], 00:10:06.282 | 99.00th=[ 6587], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 7898], 00:10:06.282 | 99.99th=[ 9241] 00:10:06.282 bw ( KiB/s): min=82032, max=88048, per=100.00%, avg=85618.67, stdev=3170.59, samples=3 00:10:06.282 iops : min=20508, max=22012, avg=21404.67, stdev=792.65, samples=3 00:10:06.282 write: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec); 0 zone resets 00:10:06.282 slat (nsec): min=3547, max=79900, avg=5599.17, stdev=2496.24 00:10:06.282 clat (usec): min=205, max=9311, avg=3006.08, stdev=941.89 00:10:06.282 lat (usec): min=211, max=9319, avg=3011.68, stdev=943.23 00:10:06.282 clat percentiles (usec): 00:10:06.282 | 1.00th=[ 2114], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2474], 00:10:06.282 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2769], 00:10:06.282 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4228], 95.00th=[ 5342], 
00:10:06.282 | 99.00th=[ 6587], 99.50th=[ 6915], 99.90th=[ 7767], 99.95th=[ 8094], 00:10:06.282 | 99.99th=[ 9110] 00:10:06.282 bw ( KiB/s): min=82272, max=88536, per=100.00%, avg=85752.00, stdev=3189.47, samples=3 00:10:06.282 iops : min=20568, max=22134, avg=21438.00, stdev=797.37, samples=3 00:10:06.282 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:10:06.282 lat (msec) : 2=0.28%, 4=88.20%, 10=11.48% 00:10:06.282 cpu : usr=99.00%, sys=0.15%, ctx=14, majf=0, minf=625 00:10:06.282 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:06.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.282 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.282 issued rwts: total=42708,42397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.282 00:10:06.282 Run status group 0 (all jobs): 00:10:06.282 READ: bw=83.4MiB/s (87.4MB/s), 83.4MiB/s-83.4MiB/s (87.4MB/s-87.4MB/s), io=167MiB (175MB), run=2001-2001msec 00:10:06.282 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:06.282 ----------------------------------------------------- 00:10:06.282 Suppressions used: 00:10:06.282 count bytes template 00:10:06.282 1 32 /usr/src/fio/parse.c 00:10:06.282 1 8 libtcmalloc_minimal.so 00:10:06.282 ----------------------------------------------------- 00:10:06.282 00:10:06.282 ************************************ 00:10:06.282 END TEST nvme_fio 00:10:06.282 ************************************ 00:10:06.282 23:05:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:06.282 23:05:25 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:06.282 00:10:06.282 real 0m27.186s 00:10:06.282 user 0m16.272s 00:10:06.282 sys 0m20.083s 00:10:06.282 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.282 23:05:25 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:06.282 00:10:06.282 real 1m35.321s 00:10:06.282 user 3m32.571s 00:10:06.282 sys 0m30.549s 00:10:06.282 23:05:25 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.282 ************************************ 00:10:06.282 END TEST nvme 00:10:06.282 ************************************ 00:10:06.282 23:05:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:06.282 23:05:25 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:06.282 23:05:25 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:06.282 23:05:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:06.282 23:05:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.282 23:05:25 -- common/autotest_common.sh@10 -- # set +x 00:10:06.282 ************************************ 00:10:06.282 START TEST nvme_scc 00:10:06.282 ************************************ 00:10:06.282 23:05:25 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:06.282 * Looking for test storage... 
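[annotation] Every suite in this log is driven by the same run_test helper: it prints the START banner, times the function under test, and prints the matching END banner, which is where the real/user/sys lines above come from (xtrace buffering can interleave the banner and the timing). Roughly, with the argument checks simplified:

# Rough shape of the run_test helper that produces the START/END banners
# and the real/user/sys timings in this log.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # e.g. run_test nvme_fio nvme_fio_test
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}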
00:10:06.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.282 23:05:25 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.282 23:05:25 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.282 23:05:25 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.282 23:05:25 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.282 23:05:25 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.543 23:05:25 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:06.543 23:05:25 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.544 23:05:25 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.544 --rc genhtml_branch_coverage=1 00:10:06.544 --rc genhtml_function_coverage=1 00:10:06.544 --rc genhtml_legend=1 00:10:06.544 --rc geninfo_all_blocks=1 00:10:06.544 --rc geninfo_unexecuted_blocks=1 00:10:06.544 00:10:06.544 ' 00:10:06.544 23:05:25 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.544 --rc genhtml_branch_coverage=1 00:10:06.544 --rc genhtml_function_coverage=1 00:10:06.544 --rc genhtml_legend=1 00:10:06.544 --rc geninfo_all_blocks=1 00:10:06.544 --rc geninfo_unexecuted_blocks=1 00:10:06.544 00:10:06.544 ' 00:10:06.544 23:05:25 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.544 --rc genhtml_branch_coverage=1 00:10:06.544 --rc genhtml_function_coverage=1 00:10:06.544 --rc genhtml_legend=1 00:10:06.544 --rc geninfo_all_blocks=1 00:10:06.544 --rc geninfo_unexecuted_blocks=1 00:10:06.544 00:10:06.544 ' 00:10:06.544 23:05:25 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.544 --rc genhtml_branch_coverage=1 00:10:06.544 --rc genhtml_function_coverage=1 00:10:06.544 --rc genhtml_legend=1 00:10:06.544 --rc geninfo_all_blocks=1 00:10:06.544 --rc geninfo_unexecuted_blocks=1 00:10:06.544 00:10:06.544 ' 00:10:06.544 23:05:25 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.544 23:05:25 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.544 23:05:25 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.544 23:05:25 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.544 23:05:25 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.544 23:05:25 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.544 23:05:25 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.544 23:05:25 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.544 23:05:25 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:06.544 23:05:25 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
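[annotation] The lcov check above walks scripts/common.sh's version comparator component by component; `lt 1.15 2` succeeds, which is why the branch/function-coverage flags get exported. Reconstructed from the trace, condensed into a standalone pair (the real cmp_versions also handles > / >= / <= via lt/gt/eq counters, and non-numeric components may be treated slightly differently):

decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

# Succeeds when $1 < $2 compared component-wise on . - : separators.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        ((a > b)) && return 1
        ((a < b)) && return 0
    done
    return 1 # equal is not less-than
}

lt 1.15 2 && echo "lcov < 2: enable branch coverage flags"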
00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:06.544 23:05:25 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:06.544 23:05:25 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.544 23:05:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:06.544 23:05:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:06.544 23:05:25 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:06.544 23:05:25 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:06.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.804 Waiting for block devices as requested 00:10:07.064 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.064 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.064 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.064 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:12.357 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:12.357 23:05:31 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:12.357 23:05:31 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:12.357 23:05:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:12.357 23:05:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:12.357 23:05:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:12.357 23:05:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:12.358 23:05:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:12.358 23:05:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:12.358 23:05:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:12.358 23:05:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
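[annotation] scan_nvme_ctrls walks /sys/class/nvme and snapshots every controller's identify data into a bash associative array, one eval per "field : value" line of nvme-cli output; the hundreds of IFS=: / read / eval triplets that follow are that loop unrolled by xtrace. In outline (whitespace trimming is simplified relative to the real functions.sh, and `nvme` stands in for the /usr/local/src/nvme-cli/nvme binary used in the log):

# Outline of the nvme_get helper whose unrolled xtrace fills the lines below.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"     # e.g. creates the global associative array nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue
        reg=${reg// /}                   # field name, e.g. "vid"
        val=${val# }                     # keep the value as printed (trailing pad intact)
        eval "${ref}[$reg]=\"\$val\""    # nvme0[vid]=0x1b36, nvme0[mdts]=7, ...
    done < <(nvme "$@")                  # e.g. nvme id-ctrl /dev/nvme0
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "sn=${nvme0[sn]} mdts=${nvme0[mdts]}"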
00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.358 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
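[annotation] Values captured this way drive later per-controller decisions; mdts=7, seen earlier in this dump, bounds the maximum data transfer size. A hypothetical consumer of the populated array (the 4096-byte minimum page size is an assumption for illustration, not a value read from this log):

# Hypothetical consumer: convert MDTS into bytes. MDTS is a power-of-two
# multiplier of the controller's minimum memory page size.
mdts=${nvme0[mdts]:-7}
min_page_size=4096   # assumed CAP.MPSMIN of 4 KiB, not taken from the log
max_xfer=$((min_page_size * (1 << mdts)))
echo "max transfer: $((max_xfer / 1024)) KiB"   # mdts=7 -> 512 KiB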
00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.359 23:05:31 nvme_scc -- nvme/functions.sh@22 
[decode loop, nvme/functions.sh@21-23, 00:10:12.359-361 — id-ctrl decode for nvme0 continues; each output line is split with IFS=: read -r reg val and, for non-empty values, assigned via eval "nvme0[$reg]=..." — fields recorded:]
  nvme0[tnvmcap]=0   nvme0[unvmcap]=0   nvme0[rpmbs]=0      nvme0[edstt]=0      nvme0[dsto]=0
  nvme0[fwug]=0      nvme0[kas]=0       nvme0[hctma]=0      nvme0[mntmt]=0      nvme0[mxtmt]=0
  nvme0[sanicap]=0   nvme0[hmminds]=0   nvme0[hmmaxd]=0     nvme0[nsetidmax]=0  nvme0[endgidmax]=0
  nvme0[anatt]=0     nvme0[anacap]=0    nvme0[anagrpmax]=0  nvme0[nanagrpid]=0  nvme0[pels]=0
  nvme0[domainid]=0  nvme0[megcap]=0    nvme0[sqes]=0x66    nvme0[cqes]=0x44    nvme0[maxcmd]=0
  nvme0[nn]=256      nvme0[oncs]=0x15d  nvme0[fuses]=0      nvme0[fna]=0        nvme0[vwc]=0x7
  nvme0[awun]=0      nvme0[awupf]=0     nvme0[icsvscc]=0    nvme0[nwpc]=0       nvme0[acwu]=0
  nvme0[ocfs]=0x3    nvme0[sgls]=0x1    nvme0[mnan]=0       nvme0[maxdna]=0     nvme0[maxcna]=0
  nvme0[subnqn]=nqn.2019-08.org.qemu:12341
  nvme0[ioccsz]=0    nvme0[iorcsz]=0    nvme0[icdoff]=0     nvme0[fcatt]=0      nvme0[msdbd]=0
  nvme0[ofcs]=0
  nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
  nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
  nvme0[active_power_workload]=-
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:12.361 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
[decode loop, nvme/functions.sh@21-23, 00:10:12.361 — id-ns decode for nvme0n1, same per-line pattern:]
  nvme0n1[nsze]=0x140000  nvme0n1[ncap]=0x140000  nvme0n1[nuse]=0x140000
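For readability, here is a minimal sketch of the nvme_get helper as it can be reconstructed from the xtrace statements above (nvme/functions.sh@16-23). The control flow and names match the trace; whitespace trimming is an assumption, and the real helper may differ in detail:

  # Sketch reconstructed from the trace; abridged, not verbatim nvme/functions.sh.
  # nvme_get <ref> <subcmd> <dev> runs nvme-cli and stores every "name : value"
  # output line into the global associative array named <ref>.
  nvme_get() {
          local ref=$1 reg val                        # @17: target array name
          shift                                       # @18: rest is the nvme-cli command
          local -gA "$ref=()"                         # @20: (re)declare global assoc array

          while IFS=: read -r reg val; do             # @21: split "name : value" lines
                  [[ -n $val ]] || continue           # @22: skip lines without a value
                  reg=${reg//[[:space:]]/}            # trim the field name (assumed)
                  val=${val# }                        # drop leading space after ':' (assumed)
                  eval "${ref}[${reg}]=\"${val}\""    # @23: e.g. nvme0[oncs]=0x15d
          done < <(/usr/local/src/nvme-cli/nvme "$@") # @16: the binary used in this run
  }

  # Usage as seen in the trace:
  #   nvme_get nvme0   id-ctrl /dev/nvme0
  #   nvme_get nvme0n1 id-ns   /dev/nvme0n1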
[decode loop, nvme/functions.sh@21-23, 00:10:12.361-363 — id-ns decode for nvme0n1 continues:]
  nvme0n1[nsfeat]=0x14  nvme0n1[nlbaf]=7     nvme0n1[flbas]=0x4   nvme0n1[mc]=0x3     nvme0n1[dpc]=0x1f
  nvme0n1[dps]=0        nvme0n1[nmic]=0      nvme0n1[rescap]=0    nvme0n1[fpi]=0      nvme0n1[dlfeat]=1
  nvme0n1[nawun]=0      nvme0n1[nawupf]=0    nvme0n1[nacwu]=0     nvme0n1[nabsn]=0    nvme0n1[nabo]=0
  nvme0n1[nabspf]=0     nvme0n1[noiob]=0     nvme0n1[nvmcap]=0    nvme0n1[npwg]=0     nvme0n1[npwa]=0
  nvme0n1[npdg]=0       nvme0n1[npda]=0      nvme0n1[nows]=0      nvme0n1[mssrl]=128  nvme0n1[mcl]=128
  nvme0n1[msrc]=127     nvme0n1[nulbaf]=0    nvme0n1[anagrpid]=0  nvme0n1[nsattr]=0   nvme0n1[nvmsetid]=0
  nvme0n1[endgid]=0
  nvme0n1[nguid]=00000000000000000000000000000000
  nvme0n1[eui64]=0000000000000000
  nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '    nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
  nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '   nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
  nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
  nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '   nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
  nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:12.363 23:05:31 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:12.363 23:05:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:12.363 23:05:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:12.363 23:05:31 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
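The @47-63 milestones above outline the outer discovery loop: enumerate controllers under /sys/class/nvme, skip any the PCI allow/block list rejects, decode id-ctrl and each namespace's id-ns, then register the results. A condensed sketch of that loop as it reads back out of the trace follows; the function name, the pci derivation, and the per-controller namespace-array declaration are assumptions (the trace shows only results), and it relies on the nvme_get sketch earlier:

  # Globals assumed to be pre-declared elsewhere in nvme/functions.sh:
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  scan_nvme_ctrls() {   # function name assumed; the trace shows only @47-63
          local ctrl ctrl_dev ns ns_dev pci
          for ctrl in /sys/class/nvme/nvme*; do                    # @47
                  [[ -e $ctrl ]] || continue                       # @48
                  pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: derivation assumed
                  pci_can_use "$pci" || continue                   # @50: scripts/common.sh allow/block list
                  ctrl_dev=${ctrl##*/}                             # @51: e.g. nvme1
                  nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52
                  local -gA "${ctrl_dev}_ns=()"                    # assumed: backing array for the nameref
                  local -n _ctrl_ns=${ctrl_dev}_ns                 # @53
                  for ns in "$ctrl/${ctrl##*/}n"*; do              # @54: e.g. .../nvme0/nvme0n1
                          [[ -e $ns ]] || continue                 # @55
                          ns_dev=${ns##*/}                         # @56
                          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57
                          _ctrl_ns[${ns##*n}]=$ns_dev              # @58: key = namespace index
                  done
                  ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
                  nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # @61
                  bdfs["$ctrl_dev"]=$pci                           # @62: e.g. 0000:00:11.0
                  ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # @63
          done
  }

In this run the loop has just finished nvme0 (bound to 0000:00:11.0) and moves on to nvme1 at 0000:00:10.0, which pci_can_use accepts because no allow/block list is set.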
00:10:12.363 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
[decode loop, nvme/functions.sh@21-23, 00:10:12.363-366 — id-ctrl decode for nvme1, same per-line pattern; the second QEMU controller reports the same feature values as nvme0:]
  nvme1[vid]=0x1b36    nvme1[ssvid]=0x1af4  nvme1[sn]='12340 '   nvme1[mn]='QEMU NVMe Ctrl '
  nvme1[fr]='8.0.0 '   nvme1[rab]=6         nvme1[ieee]=525400   nvme1[cmic]=0       nvme1[mdts]=7
  nvme1[cntlid]=0      nvme1[ver]=0x10400   nvme1[rtd3r]=0       nvme1[rtd3e]=0      nvme1[oaes]=0x100
  nvme1[ctratt]=0x8000 nvme1[rrls]=0        nvme1[cntrltype]=1
  nvme1[fguid]=00000000-0000-0000-0000-000000000000
  nvme1[crdt1]=0       nvme1[crdt2]=0       nvme1[crdt3]=0       nvme1[nvmsr]=0      nvme1[vwci]=0
  nvme1[mec]=0         nvme1[oacs]=0x12a    nvme1[acl]=3         nvme1[aerl]=3       nvme1[frmw]=0x3
  nvme1[lpa]=0x7       nvme1[elpe]=0        nvme1[npss]=0        nvme1[avscc]=0      nvme1[apsta]=0
  nvme1[wctemp]=343    nvme1[cctemp]=373    nvme1[mtfa]=0        nvme1[hmpre]=0      nvme1[hmmin]=0
  nvme1[tnvmcap]=0     nvme1[unvmcap]=0     nvme1[rpmbs]=0       nvme1[edstt]=0      nvme1[dsto]=0
  nvme1[fwug]=0        nvme1[kas]=0         nvme1[hctma]=0       nvme1[mntmt]=0      nvme1[mxtmt]=0
  nvme1[sanicap]=0     nvme1[hmminds]=0     nvme1[hmmaxd]=0      nvme1[nsetidmax]=0  nvme1[endgidmax]=0
  nvme1[anatt]=0       nvme1[anacap]=0      nvme1[anagrpmax]=0   nvme1[nanagrpid]=0  nvme1[pels]=0
  nvme1[domainid]=0    nvme1[megcap]=0      nvme1[sqes]=0x66     nvme1[cqes]=0x44    nvme1[maxcmd]=0
  nvme1[nn]=256        nvme1[oncs]=0x15d    nvme1[fuses]=0       nvme1[fna]=0        nvme1[vwc]=0x7
  nvme1[awun]=0        nvme1[awupf]=0       nvme1[icsvscc]=0     nvme1[nwpc]=0       nvme1[acwu]=0
  nvme1[ocfs]=0x3      nvme1[sgls]=0x1
00:10:12.366 23:05:31 nvme_scc
-- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
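The block above is the nvme_get helper walking the output of /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 one "reg : val" line at a time: functions.sh@21 splits on ':' with read, @22 skips lines with no value, and @23 evals the pair into a global associative array keyed by register name. A minimal sketch of that loop, with the array name and the whitespace trimming assumed rather than copied from functions.sh:

    declare -gA nvme1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue         # functions.sh@22: skip valueless lines
        reg=${reg//[[:space:]]/}          # assumed trim; id-ctrl pads the name
        val=${val# }                      # assumed trim of the single leading space
        eval "nvme1[$reg]=\"$val\""       # functions.sh@23: e.g. nvme1[sqes]=0x66
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)

Once the loop finishes, the test can consult ${nvme1[oncs]}, ${nvme1[sqes]} and friends without shelling out to nvme-cli again.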
00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:12.366 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:12.367 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:12.368 
23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:12.368 23:05:31 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:12.368 23:05:31 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:12.368 23:05:31 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:12.368 23:05:31 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:12.368 23:05:31 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.368 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:12.369 23:05:31 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:12.369 23:05:31 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.369 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
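Several of the fields being read back here are packed encodings rather than plain counts. wctemp=343 and cctemp=373 are in Kelvin (70 °C warning and 100 °C critical thresholds), while sqes=0x66 and cqes=0x44, seen for nvme1 above and repeated for nvme2 below, store the maximum and required queue-entry sizes as powers of two in the upper and lower nibble. A quick decode, as a standalone sketch:

    decode_qes() {                        # NVMe SQES/CQES: bits 7:4 max, 3:0 required
        local v=$(( $1 ))
        printf 'max=%d required=%d bytes\n' $(( 1 << (v >> 4) )) $(( 1 << (v & 0xf) ))
    }
    decode_qes 0x66                       # SQ entries: max=64 required=64 bytes
    decode_qes 0x44                       # CQ entries: max=16 required=16 bytes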
00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:12.370 23:05:31 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.370 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
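The oncs=0x15d that these QEMU controllers advertise is the Optional NVM Command Support bitmap; besides Compare, Dataset Management, Write Zeroes, the Save field in Set/Get Features, and Timestamp, bit 8 (Copy) is set, which is the capability an nvme_scc (simple copy) run needs from the device. A hedged sketch of that check:

    oncs=0x15d                            # as read back into the array above
    if (( oncs & (1 << 8) )); then        # ONCS bit 8: Copy command supported
        echo "Simple Copy supported"
    fi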
00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2 id-ctrl, continued: oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:12.371 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2: subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:12.372 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:12.372 23:05:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:12.372 23:05:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:12.372 23:05:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:12.636 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
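The trace above is one read loop repeated per line of nvme-cli output (nvme/functions.sh@16-23): split each "reg : val" line on the first ':' and store it in a per-device associative array. A minimal sketch of that pattern follows; the direct assignment is a simplification, since the real nvme_get evals into the array named by its first argument.

#!/usr/bin/env bash
shopt -s extglob
declare -A nvme2=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue        # skip banner/blank lines with no "reg: val" pair
    reg=${reg//[[:space:]]/}         # " oncs " -> "oncs"
    val=${val##*( )}                 # trim leading spaces, keep the value verbatim
    nvme2[$reg]=$val                 # real script: eval "nvme2[$reg]=\"$val\"" via nameref
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2)
echo "oncs=${nvme2[oncs]} vwc=${nvme2[vwc]} subnqn=${nvme2[subnqn]}"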
00:10:12.636 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:12.637 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
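Each namespace is found by the sysfs glob traced at nvme/functions.sh@54-58. A sketch of that walk, assuming the same paths as above; declare -n stands in for the script's local -n, which only works inside a function.

#!/usr/bin/env bash
ctrl=/sys/class/nvme/nvme2
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns             # nameref to the controller's namespace map
for ns in "$ctrl/${ctrl##*/}n"*; do      # /sys/class/nvme/nvme2/nvme2n1, n2, n3
    [[ -e $ns ]] || continue             # guard the unmatched-glob case
    ns_dev=${ns##*/}                     # e.g. nvme2n1
    # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills the per-namespace array as sketched above
    _ctrl_ns[${ns##*n}]=$ns_dev          # key 1 -> nvme2n1, 2 -> nvme2n2, ...
done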
00:10:12.638 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:12.638 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:12.638 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: lbaf0-lbaf7 identical to nvme2n1, lbaf4 'ms:0 lbads:12 rp:0' in use
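The lbaf table repeats unchanged for every namespace here. In NVMe terms, lbads is the base-2 log of the LBA data size and ms is the per-block metadata size, so the format marked "(in use)" (lbaf4, selected by the low bits of flbas=0x4) is 4096-byte blocks with no metadata. A quick arithmetic check in the shell:

flbas=0x4
echo $(( flbas & 0xf ))   # 4 -> LBA format index 4 (lbaf4), matching the "(in use)" marker
lbads=12
echo $(( 1 << lbads ))    # 4096 bytes per block; lbads:9 would be 512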
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:12.639 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3: lbaf0-lbaf7 identical to nvme2n1/nvme2n2, lbaf4 'ms:0 lbads:12 rp:0' in use
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 (scripts/common.sh@18-27: [[ =~ 0000:00:13.0 ]], [[ -z '' ]], return 0)
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3: vid=0x1b36 ssvid=0x1af4
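With nvme2's three namespaces parsed, functions.sh@60-63 records the controller in the global bookkeeping arrays before moving on to nvme3. A sketch of that registration step, using the values from the trace; array declarations are assumptions for a standalone snippet.

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
ctrl_dev=nvme2
ctrls["$ctrl_dev"]=nvme2                   # name of the id-ctrl associative array
nvmes["$ctrl_dev"]=nvme2_ns                # name of the namespace map built above
bdfs["$ctrl_dev"]=0000:00:12.0             # PCI address found during the sysfs walk
ordered_ctrls[${ctrl_dev/nvme/}]=nvme2     # numeric index 2 keeps controllers in device order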
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.640 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
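[editor's note] The wall of eval/IFS records above and below is one function doing one simple thing: nvme_get pipes `nvme id-ctrl /dev/nvme3` through a colon-split read loop and files every "field : value" pair into a global associative array named after the controller. A minimal sketch of that loop, assuming stock nvme-cli output (names here are illustrative, not the literal functions.sh source; the real function builds the assignment with eval because the array name arrives in a variable, on which see the next aside):

    declare -gA nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip lines with no "key : value" shape
        reg=${reg//[[:space:]]/}           # drop the column padding around the key
        nvme3[$reg]=${val# }               # store the value, minus one leading space
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[vid]}"                   # -> 0x1b36, matching the records above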
00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
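[editor's note] Two bash idioms carry this whole dump and are easy to miss in the noise: writes go through eval because the target array's name is computed at run time, and reads come back later through a nameref (the `local -n _ctrl=nvme1` visible further down). A stripped-down reproduction, assuming nothing beyond bash 4.3+:

    ref=nvme3 reg=wctemp val=343
    declare -gA nvme3=()                   # in the real script the scan creates this
    eval "${ref}[$reg]=\$val"              # the eval expands to: nvme3[wctemp]=$val
    declare -n _ctrl=$ref                  # nameref: _ctrl is now an alias for nvme3
    echo "${_ctrl[$reg]}"                  # -> 343, read back via the computed name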
00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 
23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:12.641 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
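[editor's note] One small decoding note on the wctemp=343 / cctemp=373 records a little way back: the identify structure reports these thresholds in kelvin, so this QEMU controller advertises the conventional 70 °C warning and 100 °C critical temperatures. Plain arithmetic, nothing controller-specific:

    wctemp=343 cctemp=373
    echo "warning:  $((wctemp - 273)) C"   # 70 C
    echo "critical: $((cctemp - 273)) C"   # 100 C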
00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
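[editor's note] A few of the values just captured do real work later in this log. oncs=0x15d has bit 8 set, which is exactly the `(( oncs & 1 << 8 ))` test ctrl_has_scc applies below to pick SCC-capable controllers, and mdts/sqes/cqes all decode as powers of two. Hedged arithmetic (the 4 KiB page size is an assumption about CAP.MPSMIN, which this trace never shows):

    oncs=0x15d
    (( oncs & 1 << 8 )) && echo "Simple Copy supported"   # 0x15d & 0x100 is non-zero
    mdts=7
    echo $(( (1 << mdts) * 4096 ))         # 524288 -> 512 KiB ceiling, assuming 4 KiB pages
    sqes=0x66 cqes=0x44                    # low nibble = log2 of the required entry size
    echo $(( 1 << (sqes & 0xf) ))          # 64-byte submission queue entries
    echo $(( 1 << (cqes & 0xf) ))          # 16-byte completion queue entries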
00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.642 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:12.643 23:05:31 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:12.643 
23:05:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:12.643 23:05:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:12.643 23:05:31 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:12.643 23:05:31 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:13.476 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.476 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.476 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:13.738 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:13.738 23:05:32 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:13.738 23:05:32 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:13.738 23:05:32 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.738 23:05:32 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:13.738 ************************************ 00:10:13.738 START TEST nvme_simple_copy 00:10:13.738 ************************************ 00:10:13.738 23:05:32 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:13.998 Initializing NVMe Controllers 00:10:13.998 Attaching to 0000:00:10.0 00:10:13.998 Controller supports SCC. Attached to 0000:00:10.0 00:10:13.998 Namespace ID: 1 size: 6GB 00:10:13.998 Initialization complete. 00:10:13.998 00:10:13.998 Controller QEMU NVMe Ctrl (12340 ) 00:10:13.998 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:13.998 Namespace Block Size:4096 00:10:13.998 Writing LBAs 0 to 63 with Random Data 00:10:13.998 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:13.998 LBAs matching Written Data: 64 00:10:13.998 00:10:13.998 real 0m0.236s 00:10:13.998 ************************************ 00:10:13.998 user 0m0.072s 00:10:13.998 sys 0m0.061s 00:10:13.998 23:05:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.998 23:05:33 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:13.998 END TEST nvme_simple_copy 00:10:13.998 ************************************ 00:10:13.998 ************************************ 00:10:13.998 END TEST nvme_scc 00:10:13.998 ************************************ 00:10:13.998 00:10:13.998 real 0m7.732s 00:10:13.998 user 0m1.038s 00:10:13.998 sys 0m1.427s 00:10:13.998 23:05:33 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.998 23:05:33 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:13.998 23:05:33 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:13.998 23:05:33 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:13.998 23:05:33 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:13.998 23:05:33 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:13.998 23:05:33 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:13.998 23:05:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.998 23:05:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.998 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:13.998 ************************************ 00:10:13.998 START TEST nvme_fdp 00:10:13.998 ************************************ 00:10:13.998 23:05:33 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:14.259 * Looking for test storage... 
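[editor's note] Before the FDP suite gets going, the nvme_simple_copy pass above deserves a gloss: the test writes random data to LBAs 0-63, issues an NVMe Simple Copy to destination LBA 256, and reports 64 matching LBAs. The copy command itself is issued by the SPDK simple_copy binary; the outline below only mirrors the write and verify halves with stock tools, and the device path is an illustrative stand-in for the controller at 0000:00:10.0:

    bs=4096                                           # namespace block size reported above
    dd if=/dev/urandom of=pattern.bin bs=$bs count=64
    dd if=pattern.bin of=/dev/nvme1n1 bs=$bs seek=0 conv=fsync   # fill LBAs 0-63
    # (the Simple Copy step -- LBAs 0-63 to destination LBA 256 -- is an NVMe
    #  command the test binary sends directly; no coreutils equivalent is shown)
    dd if=/dev/nvme1n1 of=readback.bin bs=$bs skip=256 count=64
    cmp pattern.bin readback.bin && echo "LBAs matching Written Data: 64"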
00:10:14.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.259 23:05:33 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:14.259 23:05:33 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:14.259 23:05:33 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:14.259 23:05:33 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.259 23:05:33 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:14.259 23:05:33 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.260 23:05:33 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.260 --rc geninfo_all_blocks=1 00:10:14.260 --rc geninfo_unexecuted_blocks=1 00:10:14.260 00:10:14.260 ' 00:10:14.260 23:05:33 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.260 --rc geninfo_all_blocks=1 00:10:14.260 --rc geninfo_unexecuted_blocks=1 00:10:14.260 00:10:14.260 ' 00:10:14.260 23:05:33 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.260 --rc geninfo_all_blocks=1 00:10:14.260 --rc geninfo_unexecuted_blocks=1 00:10:14.260 00:10:14.260 ' 00:10:14.260 23:05:33 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:14.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.260 --rc genhtml_branch_coverage=1 00:10:14.260 --rc genhtml_function_coverage=1 00:10:14.260 --rc genhtml_legend=1 00:10:14.260 --rc geninfo_all_blocks=1 00:10:14.260 --rc geninfo_unexecuted_blocks=1 00:10:14.260 00:10:14.260 ' 00:10:14.260 23:05:33 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.260 23:05:33 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.260 23:05:33 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.260 23:05:33 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.260 23:05:33 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.260 23:05:33 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.260 23:05:33 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.260 23:05:33 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.260 23:05:33 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:14.260 23:05:33 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
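[editor's note] Rewinding to the `lt 1.15 2` trace above: scripts/common.sh compares dotted versions by splitting both operands on `.-:` into arrays and walking them index by index until one component differs. A compact sketch of the same idea (simplified split and no component validation, unlike the real cmp_versions):

    lt() {                                  # succeeds when $1 sorts before $2
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"   # same verdict the trace reaches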
00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:14.260 23:05:33 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:14.260 23:05:33 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:14.260 23:05:33 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:14.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.782 Waiting for block devices as requested 00:10:14.782 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.782 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.782 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.044 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:20.344 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:20.344 23:05:39 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:20.344 23:05:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:20.344 23:05:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:20.344 23:05:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.344 23:05:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
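[editor's note] The pci_can_use gate at the top of this scan (scripts/common.sh@18-27) is an allow/block filter on the BDF; the bare `[[ -z '' ]]` in the trace is that filter passing because the lists are empty. In outline, under the assumption that the lists live in PCI_ALLOWED / PCI_BLOCKED as in SPDK's setup documentation (simplified, not the literal source):

    pci_can_use() {
        local bdf=$1
        # an allow-list, when set, must mention the device ...
        if [[ -n $PCI_ALLOWED && " $PCI_ALLOWED " != *" $bdf "* ]]; then
            return 1
        fi
        # ... and the block-list must not
        [[ " ${PCI_BLOCKED:-} " == *" $bdf "* ]] && return 1
        return 0
    }
    pci_can_use 0000:00:11.0 && echo "scanning 0000:00:11.0"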
00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:20.344 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:20.344 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:20.345 23:05:39 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:20.345 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:20.345 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:20.345 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:20.346 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:20.346 
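The trace above is the nvme_get helper in nvme/functions.sh walking the output of `nvme id-ctrl` under `set -x`: every `register : value` line is split on the colon and stored into a controller-specific associative array through `eval`. Below is a minimal sketch of that loop; it is simplified from the traced script, and the fixed array name `ctrl` plus the exact trimming details are illustrative rather than the real SPDK helper, which evals into a dynamic name such as nvme0 or nvme1.

    # Illustrative sketch of the parsing pattern traced above: read each
    # "register : value" line emitted by `nvme id-ctrl` and stash it in a
    # bash associative array.
    parse_id_ctrl() {
      local dev=$1 reg val
      declare -gA ctrl=()
      while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank/odd lines
        reg=${reg//[[:space:]]/}               # "mdts      " -> "mdts"
        val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces only
        ctrl[$reg]=$val                        # ctrl[mdts]=7, ctrl[ver]=0x10400, ...
      done < <(nvme id-ctrl "$dev")
    }

Note that only leading whitespace is trimmed from the value, which matches the trace keeping trailing padding in entries like nvme0[mn]='QEMU NVMe Ctrl '.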
23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:20.346 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:20.346 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:20.346 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:20.347 23:05:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:20.347 23:05:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:20.347 23:05:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.347 23:05:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 
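The id-ns dump for nvme0n1 a little earlier ended with eight LBA format descriptors (lbaf0 through lbaf7) and flbas=0x4, i.e. format 4, the 'ms:0 lbads:12 rp:0 (in use)' entry. The sketch below shows how those strings decode to a logical block size; the variable names are illustrative, the values are taken straight from the trace, and it assumes the arrays were filled by a loop like the one sketched above.

    # Illustrative decode of the namespace format fields captured above
    # (nvme0n1[flbas]=0x4, nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)').
    flbas=$(( 0x4 ))
    idx=$(( flbas & 0xf ))      # low nibble selects the active format -> 4,
                                # matching the "(in use)" marker on lbaf4
    lbaf='ms:0 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "block size: $(( 1 << lbads )) bytes"   # 2^12 = 4096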
23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.347 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 
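The entries above are the identify-parsing loop in nvme/functions.sh at work: each "reg : val" line emitted by "nvme id-ctrl" is split on the colon (hence the repeated IFS=: and read -r reg val steps) and stored into a per-controller associative array via eval. A minimal sketch of that pattern, assuming nvme-cli is installed and a controller exists at /dev/nvme1; parse_id_output is an illustrative name, not a function from the repo:

    #!/usr/bin/env bash
    # Sketch of the parse loop traced above: split each "reg : val" line of
    # `nvme id-ctrl` output on ':' and store it in an associative array,
    # mirroring trace entries such as: eval 'nvme1[crdt1]="0"'. Illustration only.
    declare -A ctrl=()
    parse_id_output() {
      local reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}             # normalize the register name
        val=${val#"${val%%[![:space:]]*}"}   # left-trim the value
        [[ -n $reg && -n $val ]] && eval "ctrl[$reg]=\"$val\""
      done
    }
    # Usage: parse_id_output < <(nvme id-ctrl /dev/nvme1)
    #        echo "${ctrl[oacs]}"   # 0x12a for the controller captured above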
23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:20.348 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.348 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:20.349 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:20.349 23:05:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:20.349 23:05:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:20.349 23:05:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.349 23:05:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:20.349 
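The lbaf0 through lbaf7 strings captured just above enumerate the namespace's supported LBA formats: lbads is log2 of the data block size, ms is the metadata bytes carried per block, and flbas=0x7 selects lbaf7, the entry the log marks "(in use)". A quick decode of that in-use format, purely as illustration:

    # Decode the in-use LBA format recorded above: lbaf7 = "ms:64 lbads:12 rp:0".
    # Block size is 2^lbads; ms is per-block metadata. Illustration only.
    lbaf7='ms:64 lbads:12 rp:0'
    ms=${lbaf7#ms:};        ms=${ms%% *}
    lbads=${lbaf7#*lbads:}; lbads=${lbads%% *}
    echo "block size: $((1 << lbads)) bytes, metadata: ${ms} bytes/block"
    # -> block size: 4096 bytes, metadata: 64 bytes/block

Combined with the nsze=0x17a17a (1,548,666 blocks) recorded earlier for nvme1n1, that works out to roughly 5.9 GiB of namespace capacity.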
23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:20.349 23:05:39 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.349 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
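The wctemp and cctemp values being read back for nvme2 here (343 and 373, matching nvme1 earlier) are composite-temperature thresholds expressed in Kelvin per the NVMe specification; converting them shows the conventional 70 °C warning and 100 °C critical limits:

    # Kelvin to Celsius for the thresholds recorded above. Illustration only.
    for k in 343 373; do
      printf '%dK = %d°C\n' "$k" "$((k - 273))"
    done
    # 343K = 70°C   (warning threshold, wctemp)
    # 373K = 100°C  (critical threshold, cctemp)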
00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:20.350 23:05:39 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.350 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:20.351 23:05:39 nvme_fdp -- 
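The trace above is nvme/functions.sh's nvme_get helper folding `/usr/local/src/nvme-cli/nvme id-ctrl` output into a global bash associative array, one register per "field : value" line. A minimal sketch of that loop, reconstructed from the traced statements at @16-@23 (the whitespace cleanup is an assumption; the upstream helper may differ in detail):

nvme_get() {
    local ref=$1 reg val                    # ref = target array name, e.g. nvme2
    shift
    local -gA "$ref=()"                     # global associative array, as at @20

    while IFS=: read -r reg val; do         # split "field : value" on ':' (@21)
        reg=${reg//[[:space:]]/}            # trim the key (assumed cleanup)
        val=${val#"${val%%[![:space:]]*}"}  # drop leading spaces from the value
        [[ -n $val ]] || continue           # the [[ -n ... ]] guard at @22
        eval "${ref}[$reg]=\"$val\""        # the eval at @23: nvme2[sqes]="0x66"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme2 id-ctrl /dev/nvme2
echo "sqes=${nvme2[sqes]} cqes=${nvme2[cqes]} nn=${nvme2[nn]}"

Because the array is declared with -gA, the ${nvme2[...]} values remain readable after nvme_get returns, which is what the rest of the trace relies on.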
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 nvme2n1[ncap]=0x100000 nvme2n1[nuse]=0x100000 nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x4 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0 nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0 nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0 nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0 nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0 nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:10:20.351 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
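Between namespaces the trace walks functions.sh@53-58: a nameref binds _ctrl_ns to the controller's per-namespace array (nvme2_ns), the controller's sysfs children are globbed, and each namespace device is registered under its numeric suffix. A self-contained sketch of that step (the function name and the declare -p check are illustrative, not from functions.sh):

scan_namespaces() {
    local ctrl=$1                        # e.g. /sys/class/nvme/nvme2
    declare -gA "${ctrl##*/}_ns=()"      # nvme2_ns
    local -n _ctrl_ns=${ctrl##*/}_ns     # nameref, as at @53

    local ns ns_dev
    for ns in "$ctrl/${ctrl##*/}n"*; do  # /sys/class/nvme/nvme2/nvme2n1 ... (@54)
        [[ -e $ns ]] || continue         # skip if the glob matched nothing (@55)
        ns_dev=${ns##*/}                 # nvme2n1 (@56)
        _ctrl_ns[${ns##*n}]=$ns_dev      # ${ns##*n} strips through the last 'n' -> 1
    done
}

scan_namespaces /sys/class/nvme/nvme2
declare -p nvme2_ns   # expected here: ([1]="nvme2n1" [2]="nvme2n2" [3]="nvme2n3")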
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:20.352 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[...]: same id-ns values as nvme2n1 above (nsze/ncap/nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, all remaining numeric fields 0, nguid/eui64 all-zero, lbaf0-lbaf7 identical with lbaf4 in use)
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
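The per-namespace values are easy to decode: flbas=0x4 selects LBA format 4, and lbaf4's lbads:12 means 2^12 = 4096-byte data blocks with ms:0 metadata bytes, which is why lbaf4 is the one flagged "(in use)". A small helper showing that arithmetic (the function name is ours, not from functions.sh):

lba_block_size() {
    local flbas=$1; shift
    local -n _ns=$1                # an array built above, e.g. nvme2n2
    local fmt=$((flbas & 0xf))     # low nibble of flbas picks the format index
    local lbaf=${_ns[lbaf$fmt]}    # 'ms:0 lbads:12 rp:0 (in use)'
    local lbads=${lbaf##*lbads:}   # '12 rp:0 (in use)'
    lbads=${lbads%% *}             # '12'
    echo $((1 << lbads))           # 2^12 = 4096
}

lba_block_size "${nvme2n2[flbas]}" nvme2n2   # prints 4096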
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[...]: same id-ns values as nvme2n1 above, recorded here through nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:20.353 23:05:39 nvme_fdp --
nvme/functions.sh@21 -- # IFS=: 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:20.353 23:05:39 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:20.353 23:05:39 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:20.353 23:05:39 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:20.353 23:05:39 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:20.353 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:20.354 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 
23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 
23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.354 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
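Several of the controller fields captured in this stretch are packed bitfields rather than plain numbers. sqes=0x66 and cqes=0x44, for example, carry the required queue entry size in the low nibble and the maximum in the high nibble, each as a power of two (per the NVMe base specification). A quick bash sketch of the decoding:
# Decode a queue-entry-size byte: low nibble = required size,
# high nibble = maximum size, both encoded as 2^n bytes.
decode_qes() {
    local v=$(( $1 ))
    printf '%s -> required=%d bytes, max=%d bytes\n' \
        "$1" $(( 1 << (v & 0xf) )) $(( 1 << (v >> 4) ))
}
decode_qes 0x66    # sqes: 64-byte submission queue entries (min and max)
decode_qes 0x44    # cqes: 16-byte completion queue entries (min and max)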
00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
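The controller-selection logic traced from here on reduces to one arithmetic test: get_ctrls_with_feature fdp reads each controller's CTRATT value and keeps the controller only if bit 19 (Flexible Data Placement) is set. A standalone sketch using the two CTRATT values seen in this run, where only nvme3's 0x88010 qualifies:
# CTRATT bit 19 = Flexible Data Placement support (the test traced above).
has_fdp() { (( $1 & 1 << 19 )); }
for ctratt in 0x8000 0x88010; do
    if has_fdp "$ctratt"; then
        echo "ctratt=$ctratt: FDP capable"       # 0x88010 -> nvme3 is selected
    else
        echo "ctratt=$ctratt: no FDP"            # 0x8000 lacks bit 19
    fi
done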
00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:20.355 23:05:39 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:20.355 23:05:39 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:20.355 23:05:39 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:20.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:21.499 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.499 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.499 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.499 23:05:40 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:21.499 23:05:40 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:21.499 23:05:40 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:21.499 23:05:40 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:21.499 ************************************
00:10:21.499 START TEST nvme_flexible_data_placement
00:10:21.499 ************************************
00:10:21.499 23:05:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:21.760 Initializing NVMe Controllers
00:10:21.760 Attaching to 0000:00:13.0
00:10:21.760 Controller supports FDP Attached to 0000:00:13.0
00:10:21.760 Namespace ID: 1 Endurance Group ID: 1
00:10:21.760 Initialization complete.
00:10:21.760
00:10:21.760 ==================================
00:10:21.760 == FDP tests for Namespace: #01 ==
00:10:21.760 ==================================
00:10:21.760
00:10:21.760 Get Feature: FDP:
00:10:21.760 =================
00:10:21.760 Enabled: Yes
00:10:21.760 FDP configuration Index: 0
00:10:21.760
00:10:21.760 FDP configurations log page
00:10:21.760 ===========================
00:10:21.760 Number of FDP configurations: 1
00:10:21.760 Version: 0
00:10:21.760 Size: 112
00:10:21.760 FDP Configuration Descriptor: 0
00:10:21.760 Descriptor Size: 96
00:10:21.760 Reclaim Group Identifier format: 2
00:10:21.760 FDP Volatile Write Cache: Not Present
00:10:21.760 FDP Configuration: Valid
00:10:21.760 Vendor Specific Size: 0
00:10:21.760 Number of Reclaim Groups: 2
00:10:21.760 Number of Reclaim Unit Handles: 8
00:10:21.760 Max Placement Identifiers: 128
00:10:21.760 Number of Namespaces Supported: 256
00:10:21.760 Reclaim Unit Nominal Size: 6000000 bytes
00:10:21.760 Estimated Reclaim Unit Time Limit: Not Reported
00:10:21.760 RUH Desc #000: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #001: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #002: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #003: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #004: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #005: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #006: RUH Type: Initially Isolated
00:10:21.760 RUH Desc #007: RUH Type: Initially Isolated
00:10:21.760
00:10:21.760 FDP reclaim unit handle usage log page
00:10:21.760 ======================================
00:10:21.760 Number of Reclaim Unit Handles: 8
00:10:21.760 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:21.760 RUH Usage Desc #001: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #002: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #003: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #004: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #005: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #006: RUH Attributes: Unused
00:10:21.760 RUH Usage Desc #007: RUH Attributes: Unused
00:10:21.760
00:10:21.760 FDP statistics log page
00:10:21.760 =======================
00:10:21.760 Host bytes with metadata written: 1700495360
00:10:21.760 Media bytes with metadata written: 1700765696
00:10:21.760 Media bytes erased: 0
00:10:21.760
00:10:21.760 FDP Reclaim unit handle status
00:10:21.760 ==============================
00:10:21.760 Number of RUHS descriptors: 2
00:10:21.760 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000a48
00:10:21.760 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:21.760
00:10:21.760 FDP write on placement id: 0 success
00:10:21.760
00:10:21.760 Set Feature: Enabling FDP events on Placement handle:
#0 Success
00:10:21.760
00:10:21.760 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:21.760
00:10:21.760 Get Feature: FDP Events for Placement handle: #0
00:10:21.760 ========================
00:10:21.760 Number of FDP Events: 6
00:10:21.760 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:21.760 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:21.760 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:21.760 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:21.760 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:21.760 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:21.760
00:10:21.760 FDP events log page
00:10:21.760 ===================
00:10:21.760 Number of FDP events: 1
00:10:21.760 FDP Event #0:
00:10:21.760 Event Type: RU Not Written to Capacity
00:10:21.760 Placement Identifier: Valid
00:10:21.760 NSID: Valid
00:10:21.760 Location: Valid
00:10:21.760 Placement Identifier: 0
00:10:21.760 Event Timestamp: 2
00:10:21.760 Namespace Identifier: 1
00:10:21.760 Reclaim Group Identifier: 0
00:10:21.760 Reclaim Unit Handle Identifier: 0
00:10:21.760
00:10:21.760 FDP test passed
00:10:21.760
00:10:21.760 real 0m0.221s
00:10:21.760 user 0m0.058s
00:10:21.760 sys 0m0.061s
00:10:21.760 ************************************
00:10:21.760 END TEST nvme_flexible_data_placement
00:10:21.760 ************************************
00:10:21.760 23:05:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:21.760 23:05:40 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:21.760 ************************************
00:10:21.760 END TEST nvme_fdp
00:10:21.760 ************************************
00:10:21.760
00:10:21.760 real 0m7.718s
00:10:21.760 user 0m1.029s
00:10:21.760 sys 0m1.412s
00:10:21.760 23:05:41 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:21.760 23:05:41 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:21.760 23:05:41 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:21.760 23:05:41 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:21.760 23:05:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:21.761 23:05:41 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:21.761 23:05:41 -- common/autotest_common.sh@10 -- # set +x
00:10:21.761 ************************************
00:10:21.761 START TEST nvme_rpc
00:10:21.761 ************************************
00:10:21.761 23:05:41 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:22.021 * Looking for test storage...
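The RUHS descriptors in the FDP report above are fixed-format lines, so they can be post-processed with plain bash word-splitting; RUAMW is the number of logical blocks still writable in the referenced reclaim unit. A sketch with the two descriptors from this run embedded as sample input:
# Parse the RUHS descriptor lines printed by the fdp test above.
while read -r _ _ idx _ pid _ _ _ _ _ ruamw; do
    printf '%s pid=%s: %d logical blocks writable\n' "$idx" "$pid" $(( ruamw ))
done <<'EOF'
RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000a48
RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
EOF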
00:10:22.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.021 23:05:41 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.021 --rc genhtml_branch_coverage=1 00:10:22.021 --rc genhtml_function_coverage=1 00:10:22.021 --rc genhtml_legend=1 00:10:22.021 --rc geninfo_all_blocks=1 00:10:22.021 --rc geninfo_unexecuted_blocks=1 00:10:22.021 00:10:22.021 ' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.021 --rc genhtml_branch_coverage=1 00:10:22.021 --rc genhtml_function_coverage=1 00:10:22.021 --rc genhtml_legend=1 00:10:22.021 --rc geninfo_all_blocks=1 00:10:22.021 --rc geninfo_unexecuted_blocks=1 00:10:22.021 00:10:22.021 ' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.021 --rc genhtml_branch_coverage=1 00:10:22.021 --rc genhtml_function_coverage=1 00:10:22.021 --rc genhtml_legend=1 00:10:22.021 --rc geninfo_all_blocks=1 00:10:22.021 --rc geninfo_unexecuted_blocks=1 00:10:22.021 00:10:22.021 ' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.021 --rc genhtml_branch_coverage=1 00:10:22.021 --rc genhtml_function_coverage=1 00:10:22.021 --rc genhtml_legend=1 00:10:22.021 --rc geninfo_all_blocks=1 00:10:22.021 --rc geninfo_unexecuted_blocks=1 00:10:22.021 00:10:22.021 ' 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:22.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78061 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:22.021 23:05:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78061 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78061 ']' 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.021 23:05:41 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.022 23:05:41 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.022 23:05:41 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.022 23:05:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.022 [2024-11-18 23:05:41.361550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:22.022 [2024-11-18 23:05:41.361830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78061 ] 00:10:22.282 [2024-11-18 23:05:41.501786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.282 [2024-11-18 23:05:41.534878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.282 [2024-11-18 23:05:41.534913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.853 23:05:42 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.853 23:05:42 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:22.853 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:23.115 Nvme0n1 00:10:23.115 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:23.115 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:23.375 request: 00:10:23.375 { 00:10:23.375 "bdev_name": "Nvme0n1", 00:10:23.375 "filename": "non_existing_file", 00:10:23.375 "method": "bdev_nvme_apply_firmware", 00:10:23.375 "req_id": 1 00:10:23.375 } 00:10:23.375 Got JSON-RPC error response 00:10:23.375 response: 00:10:23.375 { 00:10:23.375 "code": -32603, 00:10:23.375 "message": "open file failed." 00:10:23.375 } 00:10:23.375 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:23.375 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:23.375 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:23.636 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:23.636 23:05:42 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78061 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78061 ']' 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78061 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78061 00:10:23.636 killing process with pid 78061 00:10:23.636 23:05:42 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.637 23:05:42 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.637 23:05:42 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78061' 00:10:23.637 23:05:42 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78061 00:10:23.637 23:05:42 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78061 00:10:23.899 ************************************ 00:10:23.899 END TEST nvme_rpc 00:10:23.899 ************************************ 00:10:23.899 00:10:23.899 real 0m2.085s 00:10:23.899 user 0m4.031s 00:10:23.899 sys 0m0.496s 00:10:23.899 23:05:43 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.899 23:05:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.899 23:05:43 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:23.899 23:05:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:23.899 23:05:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.899 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:10:23.899 ************************************ 00:10:23.899 START TEST nvme_rpc_timeouts 00:10:23.899 ************************************ 00:10:23.899 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:24.160 * Looking for test storage... 00:10:24.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:24.160 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:24.160 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:24.160 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:24.160 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.160 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.161 23:05:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.161 --rc genhtml_branch_coverage=1 00:10:24.161 --rc genhtml_function_coverage=1 00:10:24.161 --rc genhtml_legend=1 00:10:24.161 --rc geninfo_all_blocks=1 00:10:24.161 --rc geninfo_unexecuted_blocks=1 00:10:24.161 00:10:24.161 ' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.161 --rc genhtml_branch_coverage=1 00:10:24.161 --rc genhtml_function_coverage=1 00:10:24.161 --rc genhtml_legend=1 00:10:24.161 --rc geninfo_all_blocks=1 00:10:24.161 --rc geninfo_unexecuted_blocks=1 00:10:24.161 00:10:24.161 ' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.161 --rc genhtml_branch_coverage=1 00:10:24.161 --rc genhtml_function_coverage=1 00:10:24.161 --rc genhtml_legend=1 00:10:24.161 --rc geninfo_all_blocks=1 00:10:24.161 --rc geninfo_unexecuted_blocks=1 00:10:24.161 00:10:24.161 ' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:24.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.161 --rc genhtml_branch_coverage=1 00:10:24.161 --rc genhtml_function_coverage=1 00:10:24.161 --rc genhtml_legend=1 00:10:24.161 --rc geninfo_all_blocks=1 00:10:24.161 --rc geninfo_unexecuted_blocks=1 00:10:24.161 00:10:24.161 ' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78115 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78115 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=78147 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
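The scaffolding traced here (the spdk_tgt launch line itself is interleaved just below, at nvme_rpc_timeouts.sh@24) is the standard shape of these RPC tests: launch spdk_tgt, record its pid, and register a trap so both the target process and the scratch files disappear on any exit. Condensed as a sketch — the pid and tmpfile names are the real ones from this run, but backgrounding with & and $! is an assumption, since the trace only shows the launch command and the resulting pid:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!        # 78147 in this run
    trap 'kill -9 ${spdk_tgt_pid}; rm -f /tmp/settings_default_78115 /tmp/settings_modified_78115; exit 1' SIGINT SIGTERM EXIT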
00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 78147 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 78147 ']' 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.161 23:05:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.161 23:05:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:24.161 [2024-11-18 23:05:43.449979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:24.161 [2024-11-18 23:05:43.450098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78147 ] 00:10:24.422 [2024-11-18 23:05:43.589965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.422 [2024-11-18 23:05:43.623394] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.422 [2024-11-18 23:05:43.623458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.994 23:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.994 Checking default timeout settings: 00:10:24.994 23:05:44 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:24.994 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:24.994 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:25.254 Making settings changes with rpc: 00:10:25.254 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:25.254 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:25.515 Check default vs. modified settings: 00:10:25.515 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:25.515 23:05:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78115 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78115 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:25.776 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.037 Setting action_on_timeout is changed as expected. 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78115 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78115 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.037 Setting timeout_us is changed as expected. 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78115 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78115 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.037 Setting timeout_admin_us is changed as expected. 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78115 /tmp/settings_modified_78115 00:10:26.037 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 78147 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 78147 ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 78147 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78147 00:10:26.037 killing process with pid 78147 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78147' 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 78147 00:10:26.037 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 78147 00:10:26.298 RPC TIMEOUT SETTING TEST PASSED. 00:10:26.298 23:05:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
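For reference, the default-vs-modified check that just passed, reassembled from the xtrace into one loop. The RPC calls and the grep/awk/sed extraction are verbatim from the trace; the fail-if-unchanged branch is inferred from the '[' ... == ... ']' tests at nvme_rpc_timeouts.sh@42:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_78115
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_78115
    for setting in action_on_timeout timeout_us timeout_admin_us; do
      # awk keeps the JSON value, sed strips quotes/commas: "abort", -> abort
      before=$(grep "$setting" /tmp/settings_default_78115 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_78115 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$before" == "$after" ]; then exit 1; fi   # inferred failure branch
      echo "Setting $setting is changed as expected."
    done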
00:10:26.298 ************************************ 00:10:26.298 END TEST nvme_rpc_timeouts 00:10:26.298 ************************************ 00:10:26.298 00:10:26.298 real 0m2.253s 00:10:26.298 user 0m4.514s 00:10:26.298 sys 0m0.458s 00:10:26.298 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.298 23:05:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 23:05:45 -- spdk/autotest.sh@239 -- # uname -s 00:10:26.298 23:05:45 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:26.298 23:05:45 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.298 23:05:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.298 23:05:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.298 23:05:45 -- common/autotest_common.sh@10 -- # set +x 00:10:26.298 ************************************ 00:10:26.298 START TEST sw_hotplug 00:10:26.298 ************************************ 00:10:26.298 23:05:45 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.298 * Looking for test storage... 00:10:26.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:26.298 23:05:45 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:26.298 23:05:45 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:26.298 23:05:45 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:26.560 23:05:45 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.560 23:05:45 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:26.560 23:05:45 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.560 23:05:45 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.560 --rc genhtml_branch_coverage=1 00:10:26.560 --rc genhtml_function_coverage=1 00:10:26.560 --rc genhtml_legend=1 00:10:26.560 --rc geninfo_all_blocks=1 00:10:26.560 --rc geninfo_unexecuted_blocks=1 00:10:26.560 00:10:26.560 ' 00:10:26.560 23:05:45 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:26.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.560 --rc genhtml_branch_coverage=1 00:10:26.561 --rc genhtml_function_coverage=1 00:10:26.561 --rc genhtml_legend=1 00:10:26.561 --rc geninfo_all_blocks=1 00:10:26.561 --rc geninfo_unexecuted_blocks=1 00:10:26.561 00:10:26.561 ' 00:10:26.561 23:05:45 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.561 --rc genhtml_branch_coverage=1 00:10:26.561 --rc genhtml_function_coverage=1 00:10:26.561 --rc genhtml_legend=1 00:10:26.561 --rc geninfo_all_blocks=1 00:10:26.561 --rc geninfo_unexecuted_blocks=1 00:10:26.561 00:10:26.561 ' 00:10:26.561 23:05:45 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:26.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.561 --rc genhtml_branch_coverage=1 00:10:26.561 --rc genhtml_function_coverage=1 00:10:26.561 --rc genhtml_legend=1 00:10:26.561 --rc geninfo_all_blocks=1 00:10:26.561 --rc geninfo_unexecuted_blocks=1 00:10:26.561 00:10:26.561 ' 00:10:26.561 23:05:45 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.820 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.820 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.820 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.820 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
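nvme_in_userspace, xtraced next, is a PCI class-code walk: NVMe controllers expose class 01 (mass storage), subclass 08 (non-volatile memory controller), programming interface 02 (NVM Express), hence the cc="0108" and -p02 filters. The pipeline stages in the trace below reassemble to:

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'
    # the single quotes keep the double quotes inside cc, so it matches the
    # quoted "0108" class field that lspci -mm emits; this yields
    # 0000:00:10.0 .. 0000:00:13.0 here, and each BDF is kept only if it
    # passes pci_can_use and has a /sys/bus/pci/drivers/nvme/<bdf> link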
00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.820 23:05:46 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:26.820 23:05:46 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:26.820 23:05:46 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:27.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.390 Waiting for block devices as requested 00:10:27.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.651 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.651 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.651 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:32.935 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:32.935 23:05:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:32.935 23:05:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:33.196 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:33.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.196 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:33.457 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:33.737 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.737 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78992 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:33.737 23:05:53 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:33.737 23:05:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:33.997 Initializing NVMe Controllers 00:10:33.997 Attaching to 0000:00:10.0 00:10:33.997 Attaching to 0000:00:11.0 00:10:33.997 Attached to 0000:00:11.0 00:10:33.997 Attached to 0000:00:10.0 00:10:33.997 Initialization complete. Starting I/O... 
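In the hotplug passes that follow, the "echo 1" lines from sw_hotplug.sh@40 are surprise removals through sysfs. xtrace does not capture the redirect targets, but given the standard PCI hotplug nodes (and the rescan that this script's own trap uses, visible further down this log), one remove/re-attach cycle looks like this sketch:

    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove   # driver logs "in failed state" and aborts outstanding I/O
    sleep 6                                             # hotplug_wait=6 from above
    echo 1 > /sys/bus/pci/rescan                        # device returns; "Attaching to"/"Attached to" follow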
00:10:33.997 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:33.997 QEMU NVMe Ctrl (12340 ): 1 I/Os completed (+1) 00:10:33.997 00:10:34.942 QEMU NVMe Ctrl (12341 ): 2875 I/Os completed (+2875) 00:10:34.942 QEMU NVMe Ctrl (12340 ): 2969 I/Os completed (+2968) 00:10:34.942 00:10:36.316 QEMU NVMe Ctrl (12341 ): 6494 I/Os completed (+3619) 00:10:36.316 QEMU NVMe Ctrl (12340 ): 6981 I/Os completed (+4012) 00:10:36.316 00:10:37.257 QEMU NVMe Ctrl (12341 ): 11385 I/Os completed (+4891) 00:10:37.257 QEMU NVMe Ctrl (12340 ): 12404 I/Os completed (+5423) 00:10:37.257 00:10:38.204 QEMU NVMe Ctrl (12341 ): 14819 I/Os completed (+3434) 00:10:38.204 QEMU NVMe Ctrl (12340 ): 16051 I/Os completed (+3647) 00:10:38.204 00:10:39.144 QEMU NVMe Ctrl (12341 ): 18698 I/Os completed (+3879) 00:10:39.144 QEMU NVMe Ctrl (12340 ): 19922 I/Os completed (+3871) 00:10:39.144 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.075 [2024-11-18 23:05:59.107636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:40.075 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:40.075 [2024-11-18 23:05:59.108943] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.108992] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.109006] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.109020] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:40.075 [2024-11-18 23:05:59.110004] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.110040] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.110052] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.110064] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.075 [2024-11-18 23:05:59.125699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:40.075 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:40.075 [2024-11-18 23:05:59.126491] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.126527] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.126543] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.126556] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:40.075 [2024-11-18 23:05:59.127542] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.127577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.127592] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 [2024-11-18 23:05:59.127601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.075 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:40.075 EAL: Scan for (pci) bus failed. 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:40.075 Attaching to 0000:00:10.0 00:10:40.075 Attached to 0000:00:10.0 00:10:40.075 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:40.075 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.075 23:05:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.075 Attaching to 0000:00:11.0 00:10:40.075 Attached to 0000:00:11.0 00:10:41.007 QEMU NVMe Ctrl (12340 ): 4332 I/Os completed (+4332) 00:10:41.007 QEMU NVMe Ctrl (12341 ): 3736 I/Os completed (+3736) 00:10:41.007 00:10:41.951 QEMU NVMe Ctrl (12340 ): 10893 I/Os completed (+6561) 00:10:41.951 QEMU NVMe Ctrl (12341 ): 10078 I/Os completed (+6342) 00:10:41.951 00:10:42.919 QEMU NVMe Ctrl (12340 ): 14354 I/Os completed (+3461) 00:10:42.919 QEMU NVMe Ctrl (12341 ): 13706 I/Os completed (+3628) 00:10:42.919 00:10:44.306 QEMU NVMe Ctrl (12340 ): 18666 I/Os completed (+4312) 00:10:44.306 QEMU NVMe Ctrl (12341 ): 18177 I/Os completed (+4471) 00:10:44.306 00:10:45.249 QEMU NVMe Ctrl (12340 ): 22947 I/Os completed (+4281) 00:10:45.249 QEMU NVMe Ctrl (12341 ): 22429 I/Os completed (+4252) 00:10:45.249 00:10:46.190 QEMU NVMe Ctrl (12340 ): 27075 I/Os completed (+4128) 00:10:46.190 QEMU NVMe Ctrl (12341 ): 26464 I/Os completed (+4035) 00:10:46.190 00:10:47.134 QEMU NVMe Ctrl (12340 ): 30552 I/Os completed (+3477) 
00:10:47.134 QEMU NVMe Ctrl (12341 ): 30190 I/Os completed (+3726) 00:10:47.134 00:10:48.078 QEMU NVMe Ctrl (12340 ): 33220 I/Os completed (+2668) 00:10:48.078 QEMU NVMe Ctrl (12341 ): 32871 I/Os completed (+2681) 00:10:48.078 00:10:49.020 QEMU NVMe Ctrl (12340 ): 36190 I/Os completed (+2970) 00:10:49.020 QEMU NVMe Ctrl (12341 ): 36007 I/Os completed (+3136) 00:10:49.020 00:10:49.962 QEMU NVMe Ctrl (12340 ): 39188 I/Os completed (+2998) 00:10:49.962 QEMU NVMe Ctrl (12341 ): 39057 I/Os completed (+3050) 00:10:49.962 00:10:51.348 QEMU NVMe Ctrl (12340 ): 42085 I/Os completed (+2897) 00:10:51.348 QEMU NVMe Ctrl (12341 ): 41974 I/Os completed (+2917) 00:10:51.348 00:10:51.922 QEMU NVMe Ctrl (12340 ): 44649 I/Os completed (+2564) 00:10:51.922 QEMU NVMe Ctrl (12341 ): 44546 I/Os completed (+2572) 00:10:51.922 00:10:52.181 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:52.181 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.181 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.181 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.181 [2024-11-18 23:06:11.368344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:52.181 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:52.181 [2024-11-18 23:06:11.370047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.370325] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.370375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.370473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:52.181 [2024-11-18 23:06:11.372807] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.372941] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.372978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.181 [2024-11-18 23:06:11.373016] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.182 [2024-11-18 23:06:11.394218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:52.182 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:52.182 [2024-11-18 23:06:11.395494] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.395545] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.395565] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.395581] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:52.182 [2024-11-18 23:06:11.397013] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.397068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.397089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 [2024-11-18 23:06:11.397105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.182 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:52.182 EAL: Scan for (pci) bus failed. 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.182 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:52.440 Attaching to 0000:00:10.0 00:10:52.440 Attached to 0000:00:10.0 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.440 23:06:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.440 Attaching to 0000:00:11.0 00:10:52.440 Attached to 0000:00:11.0 00:10:53.007 QEMU NVMe Ctrl (12340 ): 2998 I/Os completed (+2998) 00:10:53.007 QEMU NVMe Ctrl (12341 ): 2976 I/Os completed (+2976) 00:10:53.007 00:10:53.946 QEMU NVMe Ctrl (12340 ): 6778 I/Os completed (+3780) 00:10:53.946 QEMU NVMe Ctrl (12341 ): 7035 I/Os completed (+4059) 00:10:53.946 00:10:55.328 QEMU NVMe Ctrl (12340 ): 9957 I/Os completed (+3179) 00:10:55.328 QEMU NVMe Ctrl (12341 ): 10252 I/Os completed (+3217) 00:10:55.328 00:10:56.265 QEMU NVMe Ctrl (12340 ): 13823 I/Os completed (+3866) 00:10:56.265 QEMU NVMe Ctrl (12341 ): 14065 I/Os completed (+3813) 00:10:56.265 00:10:57.213 QEMU NVMe Ctrl (12340 ): 17589 I/Os completed (+3766) 00:10:57.213 QEMU NVMe Ctrl (12341 ): 17845 I/Os completed (+3780) 00:10:57.213 00:10:58.157 QEMU NVMe Ctrl (12340 ): 21108 I/Os completed (+3519) 00:10:58.157 QEMU NVMe Ctrl (12341 ): 21324 I/Os completed (+3479) 00:10:58.157 00:10:59.101 QEMU NVMe Ctrl (12340 ): 24650 I/Os completed (+3542) 00:10:59.101 QEMU NVMe Ctrl (12341 ): 24833 I/Os completed (+3509) 00:10:59.101 
00:11:00.044 QEMU NVMe Ctrl (12340 ): 28263 I/Os completed (+3613) 00:11:00.044 QEMU NVMe Ctrl (12341 ): 28461 I/Os completed (+3628) 00:11:00.044 00:11:01.026 QEMU NVMe Ctrl (12340 ): 32157 I/Os completed (+3894) 00:11:01.026 QEMU NVMe Ctrl (12341 ): 32299 I/Os completed (+3838) 00:11:01.026 00:11:01.958 QEMU NVMe Ctrl (12340 ): 36914 I/Os completed (+4757) 00:11:01.958 QEMU NVMe Ctrl (12341 ): 37054 I/Os completed (+4755) 00:11:01.958 00:11:03.334 QEMU NVMe Ctrl (12340 ): 41358 I/Os completed (+4444) 00:11:03.334 QEMU NVMe Ctrl (12341 ): 41469 I/Os completed (+4415) 00:11:03.334 00:11:04.268 QEMU NVMe Ctrl (12340 ): 45921 I/Os completed (+4563) 00:11:04.268 QEMU NVMe Ctrl (12341 ): 46008 I/Os completed (+4539) 00:11:04.268 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.527 [2024-11-18 23:06:23.654460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:04.527 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:04.527 [2024-11-18 23:06:23.655392] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.655560] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.655591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.655669] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.527 [2024-11-18 23:06:23.656912] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.657018] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.657047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.657068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.527 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.527 [2024-11-18 23:06:23.669862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:04.527 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:04.527 [2024-11-18 23:06:23.670705] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.670803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.670833] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 [2024-11-18 23:06:23.670881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.527 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.528 [2024-11-18 23:06:23.671827] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.528 [2024-11-18 23:06:23.671946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.528 [2024-11-18 23:06:23.671974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.528 [2024-11-18 23:06:23.672021] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:04.528 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:04.528 EAL: Scan for (pci) bus failed. 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.528 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.528 Attaching to 0000:00:10.0 00:11:04.528 Attached to 0000:00:10.0 00:11:04.788 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.788 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.788 23:06:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.788 Attaching to 0000:00:11.0 00:11:04.788 Attached to 0000:00:11.0 00:11:04.788 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.788 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.788 [2024-11-18 23:06:23.934665] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:17.017 23:06:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:17.017 23:06:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.017 23:06:35 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.83 00:11:17.017 23:06:35 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.83 00:11:17.017 23:06:35 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:17.017 23:06:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:11:17.017 23:06:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:11:17.017 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 23:06:35 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78992 00:11:23.678 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78992) - No such process 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78992 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79535 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79535 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 79535 ']' 00:11:23.678 23:06:41 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.678 23:06:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 [2024-11-18 23:06:42.033094] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:23.678 [2024-11-18 23:06:42.033609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79535 ] 00:11:23.678 [2024-11-18 23:06:42.191794] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.678 [2024-11-18 23:06:42.242622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:23.678 23:06:42 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:23.678 23:06:42 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.249 23:06:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.249 23:06:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 23:06:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:30.249 23:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.249 [2024-11-18 23:06:48.984179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:30.249 [2024-11-18 23:06:48.985263] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.249 [2024-11-18 23:06:48.985288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.249 [2024-11-18 23:06:48.985303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.249 [2024-11-18 23:06:48.985315] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.249 [2024-11-18 23:06:48.985324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.249 [2024-11-18 23:06:48.985334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.249 [2024-11-18 23:06:48.985345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.249 [2024-11-18 23:06:48.985352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.249 [2024-11-18 23:06:48.985359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.249 [2024-11-18 23:06:48.985365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.249 [2024-11-18 23:06:48.985373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.249 [2024-11-18 23:06:48.985379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.249 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.250 23:06:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.250 23:06:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.250 [2024-11-18 23:06:49.484177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
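The bdev_bdfs function traced at sw_hotplug.sh@12-@13 reduces to a single pipeline: ask the target for its bdevs over RPC and extract the unique NVMe PCI addresses. A functionally equivalent form reassembled from the trace (the trace itself feeds jq through process substitution, /dev/fd/63):

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }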
00:11:30.250 [2024-11-18 23:06:49.485175] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.250 [2024-11-18 23:06:49.485289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.250 [2024-11-18 23:06:49.485303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.250 [2024-11-18 23:06:49.485314] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.250 [2024-11-18 23:06:49.485321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.250 [2024-11-18 23:06:49.485329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.250 [2024-11-18 23:06:49.485335] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.250 [2024-11-18 23:06:49.485343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.250 [2024-11-18 23:06:49.485349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.250 [2024-11-18 23:06:49.485358] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.250 [2024-11-18 23:06:49.485364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.250 [2024-11-18 23:06:49.485372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.250 23:06:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:30.250 23:06:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.817 23:06:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.817 23:06:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.817 23:06:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.817 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.074 23:06:50 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.074 23:06:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.388 [2024-11-18 23:07:02.384381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
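Each hotplug round opens with the @39-@40 loop, one `echo 1` per controller. The redirection target is elided from the xtrace output; assuming it is the kernel's per-device remove hook, the loop amounts to:

    # Assumption: the echo targets each device's sysfs remove node, which
    # detaches the function from its driver and deletes the PCI device.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done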
00:11:43.388 [2024-11-18 23:07:02.385522] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.388 [2024-11-18 23:07:02.385544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.388 [2024-11-18 23:07:02.385555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.388 [2024-11-18 23:07:02.385566] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.388 [2024-11-18 23:07:02.385574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.388 [2024-11-18 23:07:02.385595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.388 [2024-11-18 23:07:02.385603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.388 [2024-11-18 23:07:02.385610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.388 [2024-11-18 23:07:02.385617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.388 [2024-11-18 23:07:02.385624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.388 [2024-11-18 23:07:02.385631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.388 [2024-11-18 23:07:02.385637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.388 23:07:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:43.388 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.645 [2024-11-18 23:07:02.784385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
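The @50-@51 pair above is a settle loop: rebuild the bdf list every half second and report which controllers are still visible, until bdev_get_bdevs returns none of them. Reconstructed from the trace:

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        # One "Still waiting" line per remaining controller, as seen above.
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done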
00:11:43.645 [2024-11-18 23:07:02.785504] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.645 [2024-11-18 23:07:02.785533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.645 [2024-11-18 23:07:02.785542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.645 [2024-11-18 23:07:02.785553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.645 [2024-11-18 23:07:02.785560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.645 [2024-11-18 23:07:02.785567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.645 [2024-11-18 23:07:02.785574] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.645 [2024-11-18 23:07:02.785582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.645 [2024-11-18 23:07:02.785588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.645 [2024-11-18 23:07:02.785595] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.645 [2024-11-18 23:07:02.785601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.645 [2024-11-18 23:07:02.785608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.645 23:07:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.645 23:07:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.645 23:07:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.645 23:07:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.902 23:07:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.110 23:07:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.110 23:07:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.110 23:07:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.110 23:07:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.110 23:07:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.110 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.110 [2024-11-18 23:07:15.284588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
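Re-attach is the mirror image, traced at @56-@62: rescan the bus, then walk the nvmes array again with a four-echo sequence per device. The files receiving the uio_pci_generic string, the two bdf writes, and the final empty write are not visible in the trace; one plausible mapping (driver_override plus a probe cycle) is this sketch:

    echo 1 > /sys/bus/pci/rescan                                          # @56
    for dev in "${nvmes[@]}"; do                                          # @58
        # All four redirection targets below are assumptions; the xtrace
        # output above shows only the echoed values.
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override" # @59
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/unbind 2>/dev/null # @60
        echo "$dev" > /sys/bus/pci/drivers_probe                           # @61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"              # @62
    done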
00:11:56.110 [2024-11-18 23:07:15.285734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.110 [2024-11-18 23:07:15.285833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.110 [2024-11-18 23:07:15.285851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.110 [2024-11-18 23:07:15.285863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.110 [2024-11-18 23:07:15.285872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.110 [2024-11-18 23:07:15.285879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.110 [2024-11-18 23:07:15.285887] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.110 [2024-11-18 23:07:15.285893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.110 [2024-11-18 23:07:15.285900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.110 [2024-11-18 23:07:15.285906] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.111 [2024-11-18 23:07:15.285914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.111 [2024-11-18 23:07:15.285920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.111 23:07:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.111 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:56.111 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.677 [2024-11-18 23:07:15.784594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:56.677 [2024-11-18 23:07:15.785589] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.677 [2024-11-18 23:07:15.785620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.677 [2024-11-18 23:07:15.785629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.677 [2024-11-18 23:07:15.785641] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.677 [2024-11-18 23:07:15.785648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.677 [2024-11-18 23:07:15.785657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.677 [2024-11-18 23:07:15.785663] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.677 [2024-11-18 23:07:15.785671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.677 [2024-11-18 23:07:15.785681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.677 [2024-11-18 23:07:15.785688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.677 [2024-11-18 23:07:15.785695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.677 [2024-11-18 23:07:15.785702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.677 23:07:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.677 23:07:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.677 23:07:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.677 23:07:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:56.677 23:07:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.935 23:07:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.935 23:07:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.21 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.21 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:12:09.245 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:09.245 23:07:28 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:09.245 23:07:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:09.245 23:07:28 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.809 23:07:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.809 23:07:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.809 23:07:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:15.809 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.809 [2024-11-18 23:07:34.219964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:15.809 [2024-11-18 23:07:34.221076] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.809 [2024-11-18 23:07:34.221174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.809 [2024-11-18 23:07:34.221191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.809 [2024-11-18 23:07:34.221203] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.809 [2024-11-18 23:07:34.221212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.809 [2024-11-18 23:07:34.221219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.809 [2024-11-18 23:07:34.221227] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.809 [2024-11-18 23:07:34.221233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.809 [2024-11-18 23:07:34.221243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.809 [2024-11-18 23:07:34.221249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.809 [2024-11-18 23:07:34.221257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.809 [2024-11-18 23:07:34.221264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.809 [2024-11-18 23:07:34.619965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
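The 42.83/45.21-second figures printed by @21-@22 come from timing_cmd (autotest_common.sh@707-@720), which leans on bash's TIMEFORMAT to capture the helper's wall-clock time. A minimal sketch of that mechanism, simplified to assume the timed command writes nothing to stderr:

    timing_cmd() {
        local cmd_es=0 elapsed TIMEFORMAT=%2R  # %2R = real seconds, 2 decimals
        # bash prints the `time` report on stderr; capture just that report
        # (the command's own stdout is discarded in this sketch).
        elapsed=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$elapsed"
        return "$cmd_es"
    }

Run as `timing_cmd remove_attach_helper 3 6 true`, this is what yields the "took 45.21s" summary line.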
00:12:15.809 [2024-11-18 23:07:34.620974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.809 [2024-11-18 23:07:34.621005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.810 [2024-11-18 23:07:34.621015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.810 [2024-11-18 23:07:34.621025] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.810 [2024-11-18 23:07:34.621032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.810 [2024-11-18 23:07:34.621040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.810 [2024-11-18 23:07:34.621047] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.810 [2024-11-18 23:07:34.621054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.810 [2024-11-18 23:07:34.621060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.810 [2024-11-18 23:07:34.621068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.810 [2024-11-18 23:07:34.621074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.810 [2024-11-18 23:07:34.621083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.810 23:07:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.810 23:07:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.810 23:07:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.810 23:07:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.009 23:07:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.009 23:07:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.009 23:07:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.009 23:07:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.009 23:07:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.009 23:07:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.009 23:07:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:28.009 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:28.009 [2024-11-18 23:07:47.120168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
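The backslash-heavy @71 line is not corruption: xtrace escapes every glob character on the right-hand side of a [[ == ]] test, so the check is simply a literal comparison of the recovered bdf list against the expected pair:

    bdfs=($(bdev_bdfs))
    # Quoting the right-hand side keeps it literal, which is exactly what
    # the escaped form in the trace expresses.
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]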
00:12:28.009 [2024-11-18 23:07:47.121220] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.009 [2024-11-18 23:07:47.121243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.009 [2024-11-18 23:07:47.121256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.009 [2024-11-18 23:07:47.121267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.009 [2024-11-18 23:07:47.121277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.009 [2024-11-18 23:07:47.121284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.009 [2024-11-18 23:07:47.121292] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.009 [2024-11-18 23:07:47.121298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.009 [2024-11-18 23:07:47.121306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.009 [2024-11-18 23:07:47.121312] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.009 [2024-11-18 23:07:47.121320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.009 [2024-11-18 23:07:47.121326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.266 [2024-11-18 23:07:47.520166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:28.266 [2024-11-18 23:07:47.521137] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.266 [2024-11-18 23:07:47.521181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.266 [2024-11-18 23:07:47.521192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.266 [2024-11-18 23:07:47.521202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.267 [2024-11-18 23:07:47.521210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.267 [2024-11-18 23:07:47.521218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.267 [2024-11-18 23:07:47.521224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.267 [2024-11-18 23:07:47.521232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.267 [2024-11-18 23:07:47.521238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.267 [2024-11-18 23:07:47.521245] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.267 [2024-11-18 23:07:47.521251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.267 [2024-11-18 23:07:47.521260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.267 23:07:47 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.267 23:07:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.267 23:07:47 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.267 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.525 23:07:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.743 23:07:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:40.743 23:07:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:40.743 [2024-11-18 23:08:00.020372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:40.743 [2024-11-18 23:08:00.021138] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.743 [2024-11-18 23:08:00.021174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.743 [2024-11-18 23:08:00.021186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.743 [2024-11-18 23:08:00.021198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.743 [2024-11-18 23:08:00.021209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.743 [2024-11-18 23:08:00.021216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.743 [2024-11-18 23:08:00.021224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.743 [2024-11-18 23:08:00.021230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.743 [2024-11-18 23:08:00.021238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.743 [2024-11-18 23:08:00.021244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:40.743 [2024-11-18 23:08:00.021252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.743 [2024-11-18 23:08:00.021259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.307 [2024-11-18 23:08:00.420376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
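Zooming out, the whole test is three of these rounds driven by one countdown loop. The shape reconstructed from the @27-@68 trace (the inner steps are left as comments pointing at the sketches above):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3   # 3 6 true
        sleep "$hotplug_wait"                  # @36: initial settle
        while ((hotplug_events--)); do         # @38
            # @39-@43: detach every controller (echo 1 sketch above)
            # @50-@51: poll until bdev_get_bdevs shows none of them
            # @56-@62: rescan and rebind each bdf to uio_pci_generic
            sleep $((hotplug_wait * 2))        # @66: the sleep 12 before verifying
        done
    }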
00:12:41.308 [2024-11-18 23:08:00.421309] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.308 [2024-11-18 23:08:00.421337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.308 [2024-11-18 23:08:00.421347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.308 [2024-11-18 23:08:00.421358] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.308 [2024-11-18 23:08:00.421366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.308 [2024-11-18 23:08:00.421375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.308 [2024-11-18 23:08:00.421381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.308 [2024-11-18 23:08:00.421391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.308 [2024-11-18 23:08:00.421398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.308 [2024-11-18 23:08:00.421405] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.308 [2024-11-18 23:08:00.421412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.308 [2024-11-18 23:08:00.421420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.308 23:08:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.308 23:08:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.308 23:08:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.308 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.566 23:08:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.881 23:08:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.881 23:08:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.881 23:08:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.881 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:53.882 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.69 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.69 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:53.882 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.69 00:12:53.882 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.69 2 00:12:53.882 remove_attach_helper took 44.69s to complete (handling 2 nvme drive(s)) 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:53.882 23:08:12 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79535 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 79535 ']' 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 79535 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79535 00:12:53.882 killing process with pid 79535 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79535' 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@969 -- # kill 79535 00:12:53.882 23:08:12 sw_hotplug -- common/autotest_common.sh@974 -- # wait 79535 00:12:53.882 23:08:13 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:54.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:54.715 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:54.715 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:54.715 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.715 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.715 00:12:54.715 real 2m28.511s 00:12:54.715 user 1m49.430s 00:12:54.715 sys 0m17.627s 00:12:54.715 
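The @125 teardown runs killprocess, whose internals are visible in the autotest_common.sh@950-@974 trace: validate the pid, check the process still exists, confirm (on Linux) its command name is not sudo, then kill and reap it. Reassembled from the trace; minor error handling is guessed:

    killprocess() {
        local pid=$1 process_name
        [[ -n ${pid:-} ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0              # @954: already gone
        if [[ $(uname) == Linux ]]; then                    # @955
            process_name=$(ps --no-headers -o comm= "$pid") # @956: reactor_0 here
        fi
        if [[ ${process_name:-} != sudo ]]; then            # @960: never kill sudo
            echo "killing process with pid $pid"            # @968
            kill "$pid"                                     # @969
            wait "$pid"                                     # @974
        fi
    }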
************************************ 00:12:54.715 END TEST sw_hotplug 00:12:54.715 ************************************ 00:12:54.715 23:08:14 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.715 23:08:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.976 23:08:14 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:54.976 23:08:14 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:54.976 23:08:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:54.976 23:08:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.976 23:08:14 -- common/autotest_common.sh@10 -- # set +x 00:12:54.976 ************************************ 00:12:54.976 START TEST nvme_xnvme 00:12:54.976 ************************************ 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:54.976 * Looking for test storage... 00:12:54.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:54.976 23:08:14 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:54.976 23:08:14 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.977 --rc genhtml_branch_coverage=1 00:12:54.977 --rc genhtml_function_coverage=1 00:12:54.977 --rc genhtml_legend=1 00:12:54.977 --rc geninfo_all_blocks=1 00:12:54.977 --rc geninfo_unexecuted_blocks=1 00:12:54.977 00:12:54.977 ' 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.977 --rc genhtml_branch_coverage=1 00:12:54.977 --rc genhtml_function_coverage=1 00:12:54.977 --rc genhtml_legend=1 00:12:54.977 --rc geninfo_all_blocks=1 00:12:54.977 --rc geninfo_unexecuted_blocks=1 00:12:54.977 00:12:54.977 ' 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.977 --rc genhtml_branch_coverage=1 00:12:54.977 --rc genhtml_function_coverage=1 00:12:54.977 --rc genhtml_legend=1 00:12:54.977 --rc geninfo_all_blocks=1 00:12:54.977 --rc geninfo_unexecuted_blocks=1 00:12:54.977 00:12:54.977 ' 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:54.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:54.977 --rc genhtml_branch_coverage=1 00:12:54.977 --rc genhtml_function_coverage=1 00:12:54.977 --rc genhtml_legend=1 00:12:54.977 --rc geninfo_all_blocks=1 00:12:54.977 --rc geninfo_unexecuted_blocks=1 00:12:54.977 00:12:54.977 ' 00:12:54.977 23:08:14 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:54.977 23:08:14 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:54.977 23:08:14 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.977 23:08:14 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.977 23:08:14 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.977 23:08:14 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.977 23:08:14 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.977 23:08:14 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.977 23:08:14 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:54.977 23:08:14 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.977 23:08:14 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.977 23:08:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.977 ************************************ 00:12:54.977 START TEST xnvme_to_malloc_dd_copy 00:12:54.977 ************************************ 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:54.977 23:08:14 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:54.977 23:08:14 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:55.239 { 00:12:55.239 "subsystems": [ 00:12:55.239 { 00:12:55.239 "subsystem": "bdev", 00:12:55.239 "config": [ 00:12:55.239 { 00:12:55.239 "params": { 00:12:55.239 "block_size": 512, 00:12:55.239 "num_blocks": 2097152, 00:12:55.239 "name": "malloc0" 00:12:55.239 }, 00:12:55.239 "method": "bdev_malloc_create" 00:12:55.239 }, 00:12:55.239 { 00:12:55.239 "params": { 00:12:55.239 "io_mechanism": "libaio", 00:12:55.239 "filename": "/dev/nullb0", 00:12:55.239 "name": "null0" 00:12:55.239 }, 00:12:55.239 "method": "bdev_xnvme_create" 00:12:55.239 }, 00:12:55.239 { 00:12:55.239 "method": "bdev_wait_for_examine" 00:12:55.239 } 00:12:55.239 ] 00:12:55.239 } 00:12:55.239 ] 00:12:55.239 } 00:12:55.239 [2024-11-18 23:08:14.404386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
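The trace above shows how xnvme.sh drives spdk_dd: a gen_conf helper prints the JSON bdev config and the shell hands it to spdk_dd on an inherited descriptor, which is why the trace shows --json /dev/fd/62. A minimal standalone sketch of this first libaio malloc0-to-null0 pass, assuming only an SPDK build tree at SPDK_DIR (a hypothetical variable; the log's tree lives at /home/vagrant/spdk_repo/spdk) and the null_blk module loaded as init_null_blk did above:

#!/usr/bin/env bash
# Sketch of the first copy pass traced above; SPDK_DIR is an assumption.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}

# init_null_blk equivalent: the module exposes a 1 GiB /dev/nullb0.
sudo modprobe null_blk gb=1

gen_conf() {
    # The same config the trace prints: a 1 GiB malloc bdev
    # (2097152 blocks x 512 B) plus an xnvme bdev over /dev/nullb0.
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create"
        },
        {
          "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# The traced invocation; <(gen_conf) is what appears there as /dev/fd/62.
"$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)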
00:12:55.239 [2024-11-18 23:08:14.404708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80904 ] 00:12:55.239 [2024-11-18 23:08:14.559670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.239 [2024-11-18 23:08:14.608761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.623  [2024-11-18T23:08:16.947Z] Copying: 223/1024 [MB] (223 MBps) [2024-11-18T23:08:18.327Z] Copying: 447/1024 [MB] (224 MBps) [2024-11-18T23:08:19.263Z] Copying: 716/1024 [MB] (268 MBps) [2024-11-18T23:08:19.263Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:12:59.885 00:12:59.885 23:08:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:59.885 23:08:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:59.885 23:08:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:59.885 23:08:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:00.143 { 00:13:00.143 "subsystems": [ 00:13:00.143 { 00:13:00.143 "subsystem": "bdev", 00:13:00.143 "config": [ 00:13:00.143 { 00:13:00.143 "params": { 00:13:00.143 "block_size": 512, 00:13:00.143 "num_blocks": 2097152, 00:13:00.143 "name": "malloc0" 00:13:00.143 }, 00:13:00.143 "method": "bdev_malloc_create" 00:13:00.143 }, 00:13:00.144 { 00:13:00.144 "params": { 00:13:00.144 "io_mechanism": "libaio", 00:13:00.144 "filename": "/dev/nullb0", 00:13:00.144 "name": "null0" 00:13:00.144 }, 00:13:00.144 "method": "bdev_xnvme_create" 00:13:00.144 }, 00:13:00.144 { 00:13:00.144 "method": "bdev_wait_for_examine" 00:13:00.144 } 00:13:00.144 ] 00:13:00.144 } 00:13:00.144 ] 00:13:00.144 } 00:13:00.144 [2024-11-18 23:08:19.288682] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
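The second spdk_dd pass, traced at xnvme.sh@47 above, reuses the identical config and simply swaps the endpoints: null0 becomes the input bdev and malloc0 the output, verifying the read path of the xnvme bdev. With the gen_conf sketch from the previous pass:

# Reverse direction: read null0 back into malloc0 (xnvme.sh@47 above).
"$SPDK_DIR/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json <(gen_conf)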
00:13:00.144 [2024-11-18 23:08:19.289295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80964 ] 00:13:00.144 [2024-11-18 23:08:19.435612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.144 [2024-11-18 23:08:19.463992] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.528  [2024-11-18T23:08:21.841Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-18T23:08:22.785Z] Copying: 619/1024 [MB] (310 MBps) [2024-11-18T23:08:23.043Z] Copying: 931/1024 [MB] (311 MBps) [2024-11-18T23:08:23.611Z] Copying: 1024/1024 [MB] (average 310 MBps) 00:13:04.233 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:04.233 23:08:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:04.233 { 00:13:04.233 "subsystems": [ 00:13:04.233 { 00:13:04.234 "subsystem": "bdev", 00:13:04.234 "config": [ 00:13:04.234 { 00:13:04.234 "params": { 00:13:04.234 "block_size": 512, 00:13:04.234 "num_blocks": 2097152, 00:13:04.234 "name": "malloc0" 00:13:04.234 }, 00:13:04.234 "method": "bdev_malloc_create" 00:13:04.234 }, 00:13:04.234 { 00:13:04.234 "params": { 00:13:04.234 "io_mechanism": "io_uring", 00:13:04.234 "filename": "/dev/nullb0", 00:13:04.234 "name": "null0" 00:13:04.234 }, 00:13:04.234 "method": "bdev_xnvme_create" 00:13:04.234 }, 00:13:04.234 { 00:13:04.234 "method": "bdev_wait_for_examine" 00:13:04.234 } 00:13:04.234 ] 00:13:04.234 } 00:13:04.234 ] 00:13:04.234 } 00:13:04.234 [2024-11-18 23:08:23.397366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
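Between the libaio and io_uring passes only one key changes: the xnvme.sh@39 trace above flips method_bdev_xnvme_create_0["io_mechanism"] from libaio to io_uring, and the printed JSON differs in that single field. A sketch that folds the mechanism into the generator (unquoted heredoc so $io expands; otherwise the config matches the one in the log):

gen_conf() {
    local io=${1:-libaio}   # the trace's xnvme_io array holds: libaio, io_uring
    cat <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "$io", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}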
00:13:04.234 [2024-11-18 23:08:23.397482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81018 ] 00:13:04.234 [2024-11-18 23:08:23.546762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.234 [2024-11-18 23:08:23.579703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.621  [2024-11-18T23:08:25.943Z] Copying: 236/1024 [MB] (236 MBps) [2024-11-18T23:08:26.878Z] Copying: 481/1024 [MB] (244 MBps) [2024-11-18T23:08:27.811Z] Copying: 799/1024 [MB] (317 MBps) [2024-11-18T23:08:28.069Z] Copying: 1024/1024 [MB] (average 276 MBps) 00:13:08.691 00:13:08.691 23:08:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:08.691 23:08:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:08.691 23:08:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:08.691 23:08:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:08.691 { 00:13:08.691 "subsystems": [ 00:13:08.691 { 00:13:08.692 "subsystem": "bdev", 00:13:08.692 "config": [ 00:13:08.692 { 00:13:08.692 "params": { 00:13:08.692 "block_size": 512, 00:13:08.692 "num_blocks": 2097152, 00:13:08.692 "name": "malloc0" 00:13:08.692 }, 00:13:08.692 "method": "bdev_malloc_create" 00:13:08.692 }, 00:13:08.692 { 00:13:08.692 "params": { 00:13:08.692 "io_mechanism": "io_uring", 00:13:08.692 "filename": "/dev/nullb0", 00:13:08.692 "name": "null0" 00:13:08.692 }, 00:13:08.692 "method": "bdev_xnvme_create" 00:13:08.692 }, 00:13:08.692 { 00:13:08.692 "method": "bdev_wait_for_examine" 00:13:08.692 } 00:13:08.692 ] 00:13:08.692 } 00:13:08.692 ] 00:13:08.692 } 00:13:08.692 [2024-11-18 23:08:27.905349] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
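The harness itself iterates exactly this way: xnvme_io collects both mechanisms and the traced `for io in "${xnvme_io[@]}"` loop reruns the copy pair for each, which is why four throughput summaries appear in this section. With the parameterized generator the whole matrix collapses to:

# Both mechanisms, both directions; mirrors the for-loop in the trace.
for io in libaio io_uring; do
    "$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf "$io")
    "$SPDK_DIR/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json <(gen_conf "$io")
done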
00:13:08.692 [2024-11-18 23:08:27.905573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81072 ] 00:13:08.692 [2024-11-18 23:08:28.050615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.960 [2024-11-18 23:08:28.079670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.335  [2024-11-18T23:08:30.647Z] Copying: 322/1024 [MB] (322 MBps) [2024-11-18T23:08:31.581Z] Copying: 645/1024 [MB] (322 MBps) [2024-11-18T23:08:31.581Z] Copying: 968/1024 [MB] (323 MBps) [2024-11-18T23:08:31.841Z] Copying: 1024/1024 [MB] (average 322 MBps) 00:13:12.463 00:13:12.463 23:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:12.463 23:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:12.463 00:13:12.463 real 0m17.538s 00:13:12.463 user 0m14.533s 00:13:12.463 sys 0m2.490s 00:13:12.463 ************************************ 00:13:12.463 END TEST xnvme_to_malloc_dd_copy 00:13:12.463 ************************************ 00:13:12.463 23:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.463 23:08:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:12.723 23:08:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:12.723 23:08:31 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:12.723 23:08:31 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:12.723 23:08:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.723 ************************************ 00:13:12.723 START TEST xnvme_bdevperf 00:13:12.723 ************************************ 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:12.723 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:12.723 
23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:12.724 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:12.724 23:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:12.724 23:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:12.724 23:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.724 { 00:13:12.724 "subsystems": [ 00:13:12.724 { 00:13:12.724 "subsystem": "bdev", 00:13:12.724 "config": [ 00:13:12.724 { 00:13:12.724 "params": { 00:13:12.724 "io_mechanism": "libaio", 00:13:12.724 "filename": "/dev/nullb0", 00:13:12.724 "name": "null0" 00:13:12.724 }, 00:13:12.724 "method": "bdev_xnvme_create" 00:13:12.724 }, 00:13:12.724 { 00:13:12.724 "method": "bdev_wait_for_examine" 00:13:12.724 } 00:13:12.724 ] 00:13:12.724 } 00:13:12.724 ] 00:13:12.724 } 00:13:12.724 [2024-11-18 23:08:31.976200] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:12.724 [2024-11-18 23:08:31.976320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81149 ] 00:13:12.983 [2024-11-18 23:08:32.123289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.983 [2024-11-18 23:08:32.166073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.983 Running I/O for 5 seconds... 00:13:15.307 207104.00 IOPS, 809.00 MiB/s [2024-11-18T23:08:35.279Z] 207392.00 IOPS, 810.12 MiB/s [2024-11-18T23:08:36.651Z] 207466.67 IOPS, 810.42 MiB/s [2024-11-18T23:08:37.589Z] 207520.00 IOPS, 810.62 MiB/s 00:13:18.211 Latency(us) 00:13:18.211 [2024-11-18T23:08:37.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.211 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.211 null0 : 5.00 207545.83 810.73 0.00 0.00 306.20 111.85 1518.67 00:13:18.211 [2024-11-18T23:08:37.589Z] =================================================================================================================== 00:13:18.211 [2024-11-18T23:08:37.589Z] Total : 207545.83 810.73 0.00 0.00 306.20 111.85 1518.67 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.211 23:08:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.211 { 00:13:18.211 "subsystems": [ 00:13:18.211 { 00:13:18.211 "subsystem": "bdev", 00:13:18.211 "config": [ 00:13:18.211 { 00:13:18.211 "params": { 00:13:18.211 "io_mechanism": "io_uring", 00:13:18.211 "filename": "/dev/nullb0", 00:13:18.211 "name": "null0" 00:13:18.211 }, 00:13:18.211 "method": "bdev_xnvme_create" 00:13:18.211 }, 00:13:18.211 { 00:13:18.211 "method": 
"bdev_wait_for_examine" 00:13:18.211 } 00:13:18.211 ] 00:13:18.211 } 00:13:18.211 ] 00:13:18.211 } 00:13:18.211 [2024-11-18 23:08:37.463894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:18.211 [2024-11-18 23:08:37.464014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81212 ] 00:13:18.471 [2024-11-18 23:08:37.610457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.471 [2024-11-18 23:08:37.646432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.471 Running I/O for 5 seconds... 00:13:20.776 238464.00 IOPS, 931.50 MiB/s [2024-11-18T23:08:40.720Z] 238400.00 IOPS, 931.25 MiB/s [2024-11-18T23:08:42.092Z] 238250.67 IOPS, 930.67 MiB/s [2024-11-18T23:08:43.029Z] 238256.00 IOPS, 930.69 MiB/s [2024-11-18T23:08:43.029Z] 238246.40 IOPS, 930.65 MiB/s 00:13:23.651 Latency(us) 00:13:23.651 [2024-11-18T23:08:43.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.651 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:23.651 null0 : 5.00 238176.31 930.38 0.00 0.00 266.30 236.31 1474.56 00:13:23.651 [2024-11-18T23:08:43.029Z] =================================================================================================================== 00:13:23.651 [2024-11-18T23:08:43.029Z] Total : 238176.31 930.38 0.00 0.00 266.30 236.31 1474.56 00:13:23.651 23:08:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:23.651 23:08:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:23.651 ************************************ 00:13:23.651 END TEST xnvme_bdevperf 00:13:23.651 ************************************ 00:13:23.651 00:13:23.651 real 0m10.994s 00:13:23.651 user 0m8.581s 00:13:23.651 sys 0m2.194s 00:13:23.651 23:08:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.651 23:08:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:23.651 ************************************ 00:13:23.651 END TEST nvme_xnvme 00:13:23.651 ************************************ 00:13:23.651 00:13:23.651 real 0m28.810s 00:13:23.651 user 0m23.226s 00:13:23.651 sys 0m4.821s 00:13:23.651 23:08:42 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:23.651 23:08:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.651 23:08:42 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:23.651 23:08:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:23.651 23:08:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:23.651 23:08:42 -- common/autotest_common.sh@10 -- # set +x 00:13:23.651 ************************************ 00:13:23.651 START TEST blockdev_xnvme 00:13:23.651 ************************************ 00:13:23.651 23:08:42 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:23.912 * Looking for test storage... 
00:13:23.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.912 23:08:43 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:23.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.912 --rc genhtml_branch_coverage=1 00:13:23.912 --rc genhtml_function_coverage=1 00:13:23.912 --rc genhtml_legend=1 00:13:23.912 --rc geninfo_all_blocks=1 00:13:23.912 --rc geninfo_unexecuted_blocks=1 00:13:23.912 00:13:23.912 ' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:23.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.912 --rc genhtml_branch_coverage=1 00:13:23.912 --rc genhtml_function_coverage=1 00:13:23.912 --rc genhtml_legend=1 
00:13:23.912 --rc geninfo_all_blocks=1 00:13:23.912 --rc geninfo_unexecuted_blocks=1 00:13:23.912 00:13:23.912 ' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:23.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.912 --rc genhtml_branch_coverage=1 00:13:23.912 --rc genhtml_function_coverage=1 00:13:23.912 --rc genhtml_legend=1 00:13:23.912 --rc geninfo_all_blocks=1 00:13:23.912 --rc geninfo_unexecuted_blocks=1 00:13:23.912 00:13:23.912 ' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:23.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.912 --rc genhtml_branch_coverage=1 00:13:23.912 --rc genhtml_function_coverage=1 00:13:23.912 --rc genhtml_legend=1 00:13:23.912 --rc geninfo_all_blocks=1 00:13:23.912 --rc geninfo_unexecuted_blocks=1 00:13:23.912 00:13:23.912 ' 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=81354 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 81354 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 81354 ']' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:23.912 23:08:43 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:23.912 23:08:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.912 [2024-11-18 23:08:43.202986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:23.912 [2024-11-18 23:08:43.203298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81354 ] 00:13:24.171 [2024-11-18 23:08:43.350633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.171 [2024-11-18 23:08:43.393690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.738 23:08:44 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.738 23:08:44 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:24.738 23:08:44 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:24.738 23:08:44 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:24.738 23:08:44 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:24.738 23:08:44 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:24.738 23:08:44 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:24.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.255 Waiting for block devices as requested 00:13:25.255 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.255 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.255 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.511 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:30.775 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # 
is_block_zoned nvme1n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 
blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:30.775 nvme0n1 00:13:30.775 nvme1n1 00:13:30.775 nvme2n1 00:13:30.775 nvme2n2 00:13:30.775 nvme2n3 00:13:30.775 nvme3n1 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 
blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.775 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.775 23:08:49 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "52981408-3e76-40f0-a172-082033d20d33"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "52981408-3e76-40f0-a172-082033d20d33",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "73d2089d-9c92-4ddd-a691-02fdb614f19a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73d2089d-9c92-4ddd-a691-02fdb614f19a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "15eabcd2-9c8c-4e3b-ad74-67096aafdb78"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "15eabcd2-9c8c-4e3b-ad74-67096aafdb78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b28419f-ef31-4413-ae4e-9663e46e7ce4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b28419f-ef31-4413-ae4e-9663e46e7ce4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6a04d00d-077f-45f7-bad7-fb02d7a36648"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a04d00d-077f-45f7-bad7-fb02d7a36648",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e4b5a7ac-d315-4256-9deb-aaf91926a85d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e4b5a7ac-d315-4256-9deb-aaf91926a85d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:30.776 23:08:49 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 81354 
00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 81354 ']' 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 81354 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81354 00:13:30.776 killing process with pid 81354 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81354' 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 81354 00:13:30.776 23:08:49 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 81354 00:13:31.034 23:08:50 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:31.034 23:08:50 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.034 23:08:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:31.034 23:08:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.034 23:08:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.034 ************************************ 00:13:31.034 START TEST bdev_hello_world 00:13:31.034 ************************************ 00:13:31.034 23:08:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.034 [2024-11-18 23:08:50.255301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:31.034 [2024-11-18 23:08:50.255397] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81697 ] 00:13:31.034 [2024-11-18 23:08:50.397183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.292 [2024-11-18 23:08:50.426976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.292 [2024-11-18 23:08:50.584606] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:31.292 [2024-11-18 23:08:50.584643] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:31.292 [2024-11-18 23:08:50.584659] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:31.292 [2024-11-18 23:08:50.586172] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:31.292 [2024-11-18 23:08:50.586443] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:31.292 [2024-11-18 23:08:50.586455] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:31.292 [2024-11-18 23:08:50.586691] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
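bdev_hello_world then exercises the first of those bdevs end to end: hello_bdev opens nvme0n1 through the shared bdev.json, writes a buffer, reads it back, and the read_complete notice above confirms the literal 'Hello World!' round trip. The traced invocation (the trailing '' is the harness's empty $env_ctx from blockdev.sh):

# From the trace; bdev.json carries the six bdev_xnvme_create entries.
"$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" -b nvme0n1 ''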
00:13:31.292 00:13:31.292 [2024-11-18 23:08:50.586715] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:31.552 ************************************ 00:13:31.553 END TEST bdev_hello_world 00:13:31.553 ************************************ 00:13:31.553 00:13:31.553 real 0m0.512s 00:13:31.553 user 0m0.273s 00:13:31.553 sys 0m0.132s 00:13:31.553 23:08:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.553 23:08:50 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:31.553 23:08:50 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:31.553 23:08:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:31.553 23:08:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.553 23:08:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.553 ************************************ 00:13:31.553 START TEST bdev_bounds 00:13:31.553 ************************************ 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=81728 00:13:31.553 Process bdevio pid: 81728 00:13:31.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 81728' 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 81728 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 81728 ']' 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.553 23:08:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:31.553 [2024-11-18 23:08:50.829808] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:31.553 [2024-11-18 23:08:50.830085] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81728 ] 00:13:31.814 [2024-11-18 23:08:50.974108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.814 [2024-11-18 23:08:51.029100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.814 [2024-11-18 23:08:51.029388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.814 [2024-11-18 23:08:51.029445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.386 23:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.386 23:08:51 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:32.386 23:08:51 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:32.648 I/O targets: 00:13:32.648 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:32.648 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:32.648 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:32.648 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:32.648 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:32.648 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:32.648 00:13:32.648 00:13:32.648 CUnit - A unit testing framework for C - Version 2.1-3 00:13:32.648 http://cunit.sourceforge.net/ 00:13:32.648 00:13:32.648 00:13:32.648 Suite: bdevio tests on: nvme3n1 00:13:32.648 Test: blockdev write read block ...passed 00:13:32.648 Test: blockdev write zeroes read block ...passed 00:13:32.648 Test: blockdev write zeroes read no split ...passed 00:13:32.648 Test: blockdev write zeroes read split ...passed 00:13:32.648 Test: blockdev write zeroes read split partial ...passed 00:13:32.648 Test: blockdev reset ...passed 00:13:32.649 Test: blockdev write read 8 blocks ...passed 00:13:32.649 Test: blockdev write read size > 128k ...passed 00:13:32.649 Test: blockdev write read invalid size ...passed 00:13:32.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.649 Test: blockdev write read max offset ...passed 00:13:32.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.649 Test: blockdev writev readv 8 blocks ...passed 00:13:32.649 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.649 Test: blockdev writev readv block ...passed 00:13:32.649 Test: blockdev writev readv size > 128k ...passed 00:13:32.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.649 Test: blockdev comparev and writev ...passed 00:13:32.649 Test: blockdev nvme passthru rw ...passed 00:13:32.649 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.649 Test: blockdev nvme admin passthru ...passed 00:13:32.649 Test: blockdev copy ...passed 00:13:32.649 Suite: bdevio tests on: nvme2n3 00:13:32.649 Test: blockdev write read block ...passed 00:13:32.649 Test: blockdev write zeroes read block ...passed 00:13:32.649 Test: blockdev write zeroes read no split ...passed 00:13:32.649 Test: blockdev write zeroes read split ...passed 00:13:32.649 Test: blockdev write zeroes read split partial ...passed 00:13:32.649 Test: blockdev reset ...passed 
00:13:32.649 Test: blockdev write read 8 blocks ...passed 00:13:32.649 Test: blockdev write read size > 128k ...passed 00:13:32.649 Test: blockdev write read invalid size ...passed 00:13:32.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.649 Test: blockdev write read max offset ...passed 00:13:32.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.649 Test: blockdev writev readv 8 blocks ...passed 00:13:32.649 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.649 Test: blockdev writev readv block ...passed 00:13:32.649 Test: blockdev writev readv size > 128k ...passed 00:13:32.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.649 Test: blockdev comparev and writev ...passed 00:13:32.649 Test: blockdev nvme passthru rw ...passed 00:13:32.649 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.649 Test: blockdev nvme admin passthru ...passed 00:13:32.649 Test: blockdev copy ...passed 00:13:32.649 Suite: bdevio tests on: nvme2n2 00:13:32.649 Test: blockdev write read block ...passed 00:13:32.649 Test: blockdev write zeroes read block ...passed 00:13:32.649 Test: blockdev write zeroes read no split ...passed 00:13:32.649 Test: blockdev write zeroes read split ...passed 00:13:32.649 Test: blockdev write zeroes read split partial ...passed 00:13:32.649 Test: blockdev reset ...passed 00:13:32.649 Test: blockdev write read 8 blocks ...passed 00:13:32.649 Test: blockdev write read size > 128k ...passed 00:13:32.649 Test: blockdev write read invalid size ...passed 00:13:32.649 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.649 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.649 Test: blockdev write read max offset ...passed 00:13:32.649 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.650 Test: blockdev writev readv 8 blocks ...passed 00:13:32.650 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.650 Test: blockdev writev readv block ...passed 00:13:32.650 Test: blockdev writev readv size > 128k ...passed 00:13:32.650 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.650 Test: blockdev comparev and writev ...passed 00:13:32.650 Test: blockdev nvme passthru rw ...passed 00:13:32.650 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.650 Test: blockdev nvme admin passthru ...passed 00:13:32.650 Test: blockdev copy ...passed 00:13:32.650 Suite: bdevio tests on: nvme2n1 00:13:32.650 Test: blockdev write read block ...passed 00:13:32.650 Test: blockdev write zeroes read block ...passed 00:13:32.650 Test: blockdev write zeroes read no split ...passed 00:13:32.650 Test: blockdev write zeroes read split ...passed 00:13:32.650 Test: blockdev write zeroes read split partial ...passed 00:13:32.650 Test: blockdev reset ...passed 00:13:32.650 Test: blockdev write read 8 blocks ...passed 00:13:32.650 Test: blockdev write read size > 128k ...passed 00:13:32.650 Test: blockdev write read invalid size ...passed 00:13:32.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.650 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.650 Test: blockdev write read max offset ...passed 00:13:32.650 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.650 Test: blockdev writev readv 8 blocks 
...passed 00:13:32.650 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.650 Test: blockdev writev readv block ...passed 00:13:32.650 Test: blockdev writev readv size > 128k ...passed 00:13:32.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.914 Test: blockdev comparev and writev ...passed 00:13:32.914 Test: blockdev nvme passthru rw ...passed 00:13:32.914 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.914 Test: blockdev nvme admin passthru ...passed 00:13:32.914 Test: blockdev copy ...passed 00:13:32.914 Suite: bdevio tests on: nvme1n1 00:13:32.914 Test: blockdev write read block ...passed 00:13:32.914 Test: blockdev write zeroes read block ...passed 00:13:32.914 Test: blockdev write zeroes read no split ...passed 00:13:32.914 Test: blockdev write zeroes read split ...passed 00:13:32.914 Test: blockdev write zeroes read split partial ...passed 00:13:32.914 Test: blockdev reset ...passed 00:13:32.914 Test: blockdev write read 8 blocks ...passed 00:13:32.914 Test: blockdev write read size > 128k ...passed 00:13:32.914 Test: blockdev write read invalid size ...passed 00:13:32.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.914 Test: blockdev write read max offset ...passed 00:13:32.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.914 Test: blockdev writev readv 8 blocks ...passed 00:13:32.914 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.914 Test: blockdev writev readv block ...passed 00:13:32.914 Test: blockdev writev readv size > 128k ...passed 00:13:32.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.914 Test: blockdev comparev and writev ...passed 00:13:32.914 Test: blockdev nvme passthru rw ...passed 00:13:32.914 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.914 Test: blockdev nvme admin passthru ...passed 00:13:32.914 Test: blockdev copy ...passed 00:13:32.914 Suite: bdevio tests on: nvme0n1 00:13:32.914 Test: blockdev write read block ...passed 00:13:32.914 Test: blockdev write zeroes read block ...passed 00:13:32.914 Test: blockdev write zeroes read no split ...passed 00:13:32.914 Test: blockdev write zeroes read split ...passed 00:13:32.914 Test: blockdev write zeroes read split partial ...passed 00:13:32.914 Test: blockdev reset ...passed 00:13:32.914 Test: blockdev write read 8 blocks ...passed 00:13:32.914 Test: blockdev write read size > 128k ...passed 00:13:32.914 Test: blockdev write read invalid size ...passed 00:13:32.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:32.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:32.914 Test: blockdev write read max offset ...passed 00:13:32.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:32.914 Test: blockdev writev readv 8 blocks ...passed 00:13:32.914 Test: blockdev writev readv 30 x 1block ...passed 00:13:32.914 Test: blockdev writev readv block ...passed 00:13:32.914 Test: blockdev writev readv size > 128k ...passed 00:13:32.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:32.914 Test: blockdev comparev and writev ...passed 00:13:32.914 Test: blockdev nvme passthru rw ...passed 00:13:32.914 Test: blockdev nvme passthru vendor specific ...passed 00:13:32.914 Test: blockdev nvme admin passthru ...passed 00:13:32.914 Test: blockdev copy ...passed 
00:13:32.914 00:13:32.914 Run Summary: Type Total Ran Passed Failed Inactive 00:13:32.914 suites 6 6 n/a 0 0 00:13:32.914 tests 138 138 138 0 0 00:13:32.914 asserts 780 780 780 0 n/a 00:13:32.914 00:13:32.914 Elapsed time = 0.607 seconds 00:13:32.914 0 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 81728 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 81728 ']' 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 81728 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.914 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81728 00:13:32.915 killing process with pid 81728 00:13:32.915 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.915 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.915 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81728' 00:13:32.915 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 81728 00:13:32.915 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 81728 00:13:33.176 23:08:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:33.176 00:13:33.176 real 0m1.647s 00:13:33.176 user 0m4.063s 00:13:33.176 sys 0m0.354s 00:13:33.176 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.176 23:08:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:33.176 ************************************ 00:13:33.176 END TEST bdev_bounds 00:13:33.176 ************************************ 00:13:33.176 23:08:52 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:33.176 23:08:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:33.176 23:08:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.176 23:08:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.176 ************************************ 00:13:33.176 START TEST bdev_nbd 00:13:33.176 ************************************ 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
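For reference, the bdevio run that just finished above (6 suites, 138 tests, 780 asserts) pairs a server started in wait mode against an RPC-driven test trigger, as the two logged commands show. A sketch of that two-step invocation under the same flags, assuming an SPDK checkout and the same JSON config:

    # Start bdevio in wait mode (-w) so it idles until tests are requested.
    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    sleep 2   # crude stand-in for the harness's waitforlisten helper
    # Fire the whole CUnit suite over RPC once the server is listening.
    sudo ./test/bdev/bdevio/tests.py perform_tests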
00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=81773 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 81773 /var/tmp/spdk-nbd.sock 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 81773 ']' 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:33.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.176 23:08:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.438 [2024-11-18 23:08:52.559724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
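The nbd test starting here exports each bdev as a kernel /dev/nbdN device over the dedicated UNIX socket, reads one block through it with dd, and detaches it again. A condensed sketch of one such attach/verify/detach cycle, assuming the nbd kernel module is loaded and using the socket path from the log (the output file path is illustrative):

    # Attach bdev nvme0n1 to the kernel NBD device /dev/nbd0.
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # One 4 KiB direct-I/O read, mirroring the dd check in the log.
    sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Detach the NBD device.
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0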
00:13:33.438 [2024-11-18 23:08:52.559872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.438 [2024-11-18 23:08:52.712654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.438 [2024-11-18 23:08:52.765377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.416 
1+0 records in 00:13:34.416 1+0 records out 00:13:34.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123597 s, 3.3 MB/s 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.416 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.690 1+0 records in 00:13:34.690 1+0 records out 00:13:34.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141185 s, 2.9 MB/s 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.690 23:08:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:34.962 23:08:54 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.962 1+0 records in 00:13:34.962 1+0 records out 00:13:34.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103326 s, 4.0 MB/s 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.962 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:35.223 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:35.223 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:35.223 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:35.223 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:35.223 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.224 1+0 records in 00:13:35.224 1+0 records out 00:13:35.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113301 s, 3.6 MB/s 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.224 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.485 1+0 records in 00:13:35.485 1+0 records out 00:13:35.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104247 s, 3.9 MB/s 00:13:35.485 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.486 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:35.747 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:35.747 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:35.747 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:35.747 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:35.747 23:08:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.747 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.748 1+0 records in 00:13:35.748 1+0 records out 00:13:35.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111837 s, 3.7 MB/s 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.748 23:08:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd0", 00:13:36.009 "bdev_name": "nvme0n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd1", 00:13:36.009 "bdev_name": "nvme1n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd2", 00:13:36.009 "bdev_name": "nvme2n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd3", 00:13:36.009 "bdev_name": "nvme2n2" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd4", 00:13:36.009 "bdev_name": "nvme2n3" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd5", 00:13:36.009 "bdev_name": "nvme3n1" 00:13:36.009 } 00:13:36.009 ]' 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd0", 00:13:36.009 "bdev_name": "nvme0n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd1", 00:13:36.009 "bdev_name": "nvme1n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd2", 00:13:36.009 "bdev_name": "nvme2n1" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd3", 00:13:36.009 "bdev_name": "nvme2n2" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd4", 00:13:36.009 "bdev_name": "nvme2n3" 00:13:36.009 }, 00:13:36.009 { 00:13:36.009 "nbd_device": "/dev/nbd5", 00:13:36.009 "bdev_name": "nvme3n1" 00:13:36.009 } 00:13:36.009 ]' 00:13:36.009 23:08:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.009 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.270 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.530 23:08:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.792 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.054 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.316 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.575 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:37.575 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:37.575 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.575 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:37.575 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.576 23:08:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:37.834 /dev/nbd0 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.834 1+0 records in 00:13:37.834 1+0 records out 00:13:37.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347487 s, 11.8 MB/s 00:13:37.834 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.835 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:38.093 /dev/nbd1 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.093 1+0 records in 00:13:38.093 1+0 records out 00:13:38.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519873 s, 7.9 MB/s 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.093 23:08:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.093 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:38.354 /dev/nbd10 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.354 1+0 records in 00:13:38.354 1+0 records out 00:13:38.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864021 s, 4.7 MB/s 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.354 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:38.614 /dev/nbd11 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.614 23:08:57 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.614 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.615 1+0 records in 00:13:38.615 1+0 records out 00:13:38.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113856 s, 3.6 MB/s 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.615 23:08:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:38.875 /dev/nbd12 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.875 1+0 records in 00:13:38.875 1+0 records out 00:13:38.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00164326 s, 2.5 MB/s 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.875 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:39.136 /dev/nbd13 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.136 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.136 1+0 records in 00:13:39.136 1+0 records out 00:13:39.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125045 s, 3.3 MB/s 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.137 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.398 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd0", 00:13:39.398 "bdev_name": "nvme0n1" 00:13:39.398 }, 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd1", 00:13:39.398 "bdev_name": "nvme1n1" 00:13:39.398 }, 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd10", 00:13:39.398 "bdev_name": "nvme2n1" 00:13:39.398 }, 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd11", 00:13:39.398 "bdev_name": "nvme2n2" 00:13:39.398 }, 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd12", 00:13:39.398 "bdev_name": "nvme2n3" 00:13:39.398 }, 00:13:39.398 { 00:13:39.398 "nbd_device": "/dev/nbd13", 00:13:39.398 "bdev_name": "nvme3n1" 00:13:39.398 } 00:13:39.398 ]' 00:13:39.398 23:08:58 blockdev_xnvme.bdev_nbd 
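The six attach cycles above all follow the same pattern from nbd_common.sh and autotest_common.sh: an nbd_start_disk RPC maps one bdev onto an /dev/nbdN node, waitfornbd polls /proc/partitions until the kernel has registered the device, and a single 4 KiB O_DIRECT read through dd proves the mapping actually serves data. A condensed sketch of that cycle, paraphrased from the xtrace output (the sleep between polls is an assumption; the trace only shows the grep and the break, and the scratch path is a stand-in for the logged test/bdev/nbdtest file):

    # map a bdev onto an NBD node over the SPDK RPC socket
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the trace elides it
        done
        # one O_DIRECT read proves the device actually serves data
        dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
    waitfornbd nbd13

The nbd_get_disks JSON dumped above is then piped through jq -r '.[] | .nbd_device' and counted with grep -c /dev/nbd, which is the count=6 assertion that follows.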
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.398 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd0", 00:13:39.399 "bdev_name": "nvme0n1" 00:13:39.399 }, 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd1", 00:13:39.399 "bdev_name": "nvme1n1" 00:13:39.399 }, 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd10", 00:13:39.399 "bdev_name": "nvme2n1" 00:13:39.399 }, 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd11", 00:13:39.399 "bdev_name": "nvme2n2" 00:13:39.399 }, 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd12", 00:13:39.399 "bdev_name": "nvme2n3" 00:13:39.399 }, 00:13:39.399 { 00:13:39.399 "nbd_device": "/dev/nbd13", 00:13:39.399 "bdev_name": "nvme3n1" 00:13:39.399 } 00:13:39.399 ]' 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:39.399 /dev/nbd1 00:13:39.399 /dev/nbd10 00:13:39.399 /dev/nbd11 00:13:39.399 /dev/nbd12 00:13:39.399 /dev/nbd13' 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:39.399 /dev/nbd1 00:13:39.399 /dev/nbd10 00:13:39.399 /dev/nbd11 00:13:39.399 /dev/nbd12 00:13:39.399 /dev/nbd13' 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:39.399 256+0 records in 00:13:39.399 256+0 records out 00:13:39.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00722779 s, 145 MB/s 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.399 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:39.660 256+0 records in 00:13:39.660 256+0 records out 00:13:39.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23887 s, 4.4 MB/s 00:13:39.660 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.660 23:08:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:39.921 256+0 records in 00:13:39.921 256+0 records out 00:13:39.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.303833 s, 
3.5 MB/s 00:13:39.921 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.921 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:40.182 256+0 records in 00:13:40.182 256+0 records out 00:13:40.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197123 s, 5.3 MB/s 00:13:40.182 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.182 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:40.444 256+0 records in 00:13:40.444 256+0 records out 00:13:40.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233302 s, 4.5 MB/s 00:13:40.444 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.444 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:40.706 256+0 records in 00:13:40.706 256+0 records out 00:13:40.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255804 s, 4.1 MB/s 00:13:40.706 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:40.706 23:08:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:40.706 256+0 records in 00:13:40.706 256+0 records out 00:13:40.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210457 s, 5.0 MB/s 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:40.966 
23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.966 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.228 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
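The data pass that just finished (nbd_dd_data_verify) is symmetric: a write phase seeds a 1 MiB random file and copies it onto every device with O_DIRECT, and a verify phase byte-compares the first 1 MiB of each device against the same file before removing it. Condensed from the trace, with the device list and sizes exactly as logged:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    tmp_file=test/bdev/nbdrandtest

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # 1 MiB of random data
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct  # write phase
    done
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"                             # verify phase
    done
    rm "$tmp_file"

The nbd_stop_disks loop that starts here mirrors the attach path in reverse: waitfornbd_exit polls /proc/partitions until the nbdN entry disappears before moving to the next device.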
/dev/nbd10 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.489 23:09:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.748 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.006 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.264 
23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.264 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:42.522 malloc_lvol_verify 00:13:42.522 23:09:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:42.781 5b69674e-c6a5-4b09-a3dd-c2ed59ca7c60 00:13:42.781 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:43.039 6e7aae1f-66fc-4ab6-90f3-625ecfe35937 00:13:43.040 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:43.298 /dev/nbd0 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:43.298 mke2fs 1.47.0 (5-Feb-2023) 00:13:43.298 Discarding device blocks: 0/4096 
done 00:13:43.298 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:43.298 00:13:43.298 Allocating group tables: 0/1 done 00:13:43.298 Writing inode tables: 0/1 done 00:13:43.298 Creating journal (1024 blocks): done 00:13:43.298 Writing superblocks and filesystem accounting information: 0/1 done 00:13:43.298 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.298 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 81773 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 81773 ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 81773 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81773 00:13:43.557 killing process with pid 81773 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81773' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 81773 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 81773 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:43.557 00:13:43.557 real 0m10.378s 00:13:43.557 user 0m14.235s 00:13:43.557 sys 0m3.670s 00:13:43.557 ************************************ 00:13:43.557 END TEST bdev_nbd 00:13:43.557 ************************************ 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.557 23:09:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
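nbd_with_lvol_verify, which just completed, is the final bdev_nbd check: it stacks a logical volume on a fresh malloc bdev, exports it as /dev/nbd0, waits for /sys/block/nbd0/size to report a non-zero capacity, and formats it with ext4, which is the mke2fs output above. The RPC sequence, condensed from the trace (sizes as logged):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    (( $(cat /sys/block/nbd0/size) != 0 ))                 # capacity visible to the kernel
    mkfs.ext4 /dev/nbd0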
00:13:43.557 23:09:02 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:43.557 23:09:02 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:43.557 23:09:02 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:43.557 23:09:02 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:43.557 23:09:02 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.557 23:09:02 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.557 23:09:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.557 ************************************ 00:13:43.557 START TEST bdev_fio 00:13:43.557 ************************************ 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:43.557 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:43.557 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- 
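fio_test_suite builds its job file before running anything: fio_config_gen creates bdev.fio for a verify workload against AIO-style bdevs, appends serialize_overlap=1 once the fio --version probe confirms a 3.x binary, and then, as the following lines show, appends one [job_<bdev>] section with a filename= line per bdev. A sketch of that generation step; the template path is hypothetical, since the trace only shows it being cat'ed into place:

    config=test/bdev/bdev.fio
    touch "$config"
    cat fio.template >> "$config"   # hypothetical template path; the trace elides it
    [[ $(/usr/src/fio/fio --version) == *fio-3* ]] && echo serialize_overlap=1 >> "$config"
    for b in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config"
    done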
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:43.819 ************************************ 00:13:43.819 START TEST bdev_fio_rw_verify 00:13:43.819 ************************************ 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:43.819 23:09:02 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:43.819 23:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.819 23:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.819 23:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:43.819 23:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.819 23:09:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:43.819 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:43.819 fio-3.35 00:13:43.819 Starting 6 threads 00:13:56.065 00:13:56.065 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=82178: Mon Nov 18 23:09:13 2024 00:13:56.065 read: IOPS=17.3k, BW=67.8MiB/s (71.1MB/s)(678MiB/10001msec) 00:13:56.065 slat (usec): min=2, max=4028, avg= 5.61, stdev=16.81 00:13:56.065 clat (usec): min=88, max=7862, avg=1167.73, stdev=845.86 00:13:56.065 lat (usec): min=90, max=7883, avg=1173.34, stdev=846.78 
00:13:56.065 clat percentiles (usec): 00:13:56.065 | 50.000th=[ 1012], 99.000th=[ 3785], 99.900th=[ 5473], 99.990th=[ 7635], 00:13:56.065 | 99.999th=[ 7832] 00:13:56.065 write: IOPS=17.7k, BW=69.0MiB/s (72.4MB/s)(690MiB/10001msec); 0 zone resets 00:13:56.065 slat (usec): min=9, max=4159, avg=33.31, stdev=118.33 00:13:56.065 clat (usec): min=73, max=7459, avg=1286.50, stdev=872.81 00:13:56.065 lat (usec): min=87, max=7921, avg=1319.81, stdev=887.28 00:13:56.065 clat percentiles (usec): 00:13:56.065 | 50.000th=[ 1123], 99.000th=[ 4015], 99.900th=[ 5473], 99.990th=[ 7111], 00:13:56.065 | 99.999th=[ 7439] 00:13:56.065 bw ( KiB/s): min=42079, max=186728, per=92.54%, avg=65391.74, stdev=5198.26, samples=114 00:13:56.065 iops : min=10519, max=46682, avg=16347.47, stdev=1299.58, samples=114 00:13:56.065 lat (usec) : 100=0.02%, 250=6.15%, 500=17.75%, 750=13.08%, 1000=10.13% 00:13:56.065 lat (msec) : 2=35.97%, 4=16.05%, 10=0.86% 00:13:56.065 cpu : usr=47.17%, sys=31.22%, ctx=5906, majf=0, minf=16740 00:13:56.065 IO depths : 1=11.8%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.065 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.065 issued rwts: total=173527,176672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.065 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:56.065 00:13:56.065 Run status group 0 (all jobs): 00:13:56.065 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=678MiB (711MB), run=10001-10001msec 00:13:56.065 WRITE: bw=69.0MiB/s (72.4MB/s), 69.0MiB/s-69.0MiB/s (72.4MB/s-72.4MB/s), io=690MiB (724MB), run=10001-10001msec 00:13:56.065 ----------------------------------------------------- 00:13:56.065 Suppressions used: 00:13:56.065 count bytes template 00:13:56.065 6 48 /usr/src/fio/parse.c 00:13:56.065 3029 290784 /usr/src/fio/iolog.c 00:13:56.065 1 8 libtcmalloc_minimal.so 00:13:56.065 1 904 libcrypto.so 00:13:56.065 ----------------------------------------------------- 00:13:56.065 00:13:56.065 00:13:56.065 real 0m11.091s 00:13:56.065 user 0m28.950s 00:13:56.065 sys 0m19.062s 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:56.065 ************************************ 00:13:56.065 END TEST bdev_fio_rw_verify 00:13:56.065 ************************************ 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- 
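The ten-second randwrite/verify run above was launched through SPDK's fio plugin rather than a standalone binary: the harness resolves the sanitizer runtime the plugin was linked against (ldd | grep libasan | awk '{print $3}') and preloads it ahead of the plugin so the ASan runtime initializes before fio dlopens the spdk_bdev ioengine. Condensed from the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 \
        --aux-path=/home/vagrant/spdk_repo/spdk/../output

The suite then regenerates bdev.fio for a trim pass, restricted to bdevs whose supported_io_types.unmap is true; that is what the jq select over the bdev JSON below is for.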
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:56.065 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "52981408-3e76-40f0-a172-082033d20d33"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "52981408-3e76-40f0-a172-082033d20d33",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "73d2089d-9c92-4ddd-a691-02fdb614f19a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73d2089d-9c92-4ddd-a691-02fdb614f19a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "15eabcd2-9c8c-4e3b-ad74-67096aafdb78"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "15eabcd2-9c8c-4e3b-ad74-67096aafdb78",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "4b28419f-ef31-4413-ae4e-9663e46e7ce4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4b28419f-ef31-4413-ae4e-9663e46e7ce4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "6a04d00d-077f-45f7-bad7-fb02d7a36648"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a04d00d-077f-45f7-bad7-fb02d7a36648",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "e4b5a7ac-d315-4256-9deb-aaf91926a85d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e4b5a7ac-d315-4256-9deb-aaf91926a85d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:56.066 /home/vagrant/spdk_repo/spdk 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:56.066 00:13:56.066 real 0m11.237s 00:13:56.066 user 
0m29.019s 00:13:56.066 sys 0m19.136s 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:56.066 23:09:14 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 ************************************ 00:13:56.066 END TEST bdev_fio 00:13:56.066 ************************************ 00:13:56.066 23:09:14 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:56.066 23:09:14 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:56.066 23:09:14 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:56.066 23:09:14 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.066 23:09:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.066 ************************************ 00:13:56.066 START TEST bdev_verify 00:13:56.066 ************************************ 00:13:56.066 23:09:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:56.066 [2024-11-18 23:09:14.253813] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:56.066 [2024-11-18 23:09:14.254034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82337 ] 00:13:56.066 [2024-11-18 23:09:14.402484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.066 [2024-11-18 23:09:14.434291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.066 [2024-11-18 23:09:14.434423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.066 Running I/O for 5 seconds... 
00:13:57.568 25888.00 IOPS, 101.12 MiB/s [2024-11-18T23:09:17.895Z] 26016.00 IOPS, 101.62 MiB/s [2024-11-18T23:09:18.829Z] 25525.33 IOPS, 99.71 MiB/s [2024-11-18T23:09:19.762Z] 25488.00 IOPS, 99.56 MiB/s [2024-11-18T23:09:19.762Z] 25490.80 IOPS, 99.57 MiB/s 00:14:00.384 Latency(us) 00:14:00.384 [2024-11-18T23:09:19.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.384 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0xa0000 00:14:00.384 nvme0n1 : 5.02 1808.72 7.07 0.00 0.00 70622.70 13510.50 60898.07 00:14:00.384 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0xa0000 length 0xa0000 00:14:00.384 nvme0n1 : 5.06 1872.54 7.31 0.00 0.00 68214.28 7259.37 64124.46 00:14:00.384 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0xbd0bd 00:14:00.384 nvme1n1 : 5.04 3305.58 12.91 0.00 0.00 38481.14 3906.95 65334.35 00:14:00.384 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:00.384 nvme1n1 : 5.04 3390.00 13.24 0.00 0.00 37571.71 4461.49 63317.86 00:14:00.384 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0x80000 00:14:00.384 nvme2n1 : 5.06 1821.97 7.12 0.00 0.00 69853.53 3957.37 69367.34 00:14:00.384 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x80000 length 0x80000 00:14:00.384 nvme2n1 : 5.06 1871.22 7.31 0.00 0.00 67816.61 9527.93 64931.05 00:14:00.384 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0x80000 00:14:00.384 nvme2n2 : 5.06 1820.70 7.11 0.00 0.00 69864.10 4159.02 73400.32 00:14:00.384 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x80000 length 0x80000 00:14:00.384 nvme2n2 : 5.07 1892.89 7.39 0.00 0.00 66870.80 5268.09 62914.56 00:14:00.384 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0x80000 00:14:00.384 nvme2n3 : 5.07 1819.49 7.11 0.00 0.00 69748.39 7461.02 75013.51 00:14:00.384 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x80000 length 0x80000 00:14:00.384 nvme2n3 : 5.07 1892.36 7.39 0.00 0.00 66822.98 5999.06 62107.96 00:14:00.384 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x0 length 0x20000 00:14:00.384 nvme3n1 : 5.07 1817.71 7.10 0.00 0.00 69671.19 3932.16 69367.34 00:14:00.384 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:00.384 Verification LBA range: start 0x20000 length 0x20000 00:14:00.384 nvme3n1 : 5.07 1891.83 7.39 0.00 0.00 66810.34 6704.84 66140.95 00:14:00.384 [2024-11-18T23:09:19.762Z] =================================================================================================================== 00:14:00.384 [2024-11-18T23:09:19.762Z] Total : 25205.01 98.46 0.00 0.00 60501.65 3906.95 75013.51 00:14:00.642 00:14:00.642 real 0m5.702s 00:14:00.642 user 0m8.899s 00:14:00.642 sys 0m1.774s 00:14:00.642 ************************************ 00:14:00.642 END TEST 
bdev_verify 00:14:00.642 ************************************ 00:14:00.642 23:09:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.642 23:09:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:00.642 23:09:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:00.642 23:09:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:00.642 23:09:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.642 23:09:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.642 ************************************ 00:14:00.642 START TEST bdev_verify_big_io 00:14:00.642 ************************************ 00:14:00.642 23:09:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:00.642 [2024-11-18 23:09:19.983236] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:00.642 [2024-11-18 23:09:19.983336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82431 ] 00:14:00.901 [2024-11-18 23:09:20.130539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.901 [2024-11-18 23:09:20.163034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.901 [2024-11-18 23:09:20.163081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.159 Running I/O for 5 seconds... 
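bdev_verify_big_io repeats the same bdevperf verify run with the I/O size raised from 4 KiB to 64 KiB, pushing larger transfers through the same bdevs; only -o changes relative to the previous test:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3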
00:14:07.247 1504.00 IOPS, 94.00 MiB/s [2024-11-18T23:09:26.625Z] 3579.00 IOPS, 223.69 MiB/s 00:14:07.247 Latency(us) 00:14:07.247 [2024-11-18T23:09:26.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.247 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0xa000 00:14:07.247 nvme0n1 : 5.92 121.61 7.60 0.00 0.00 983393.17 72593.72 1387346.71 00:14:07.247 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0xa000 length 0xa000 00:14:07.247 nvme0n1 : 5.94 107.73 6.73 0.00 0.00 1127625.02 51622.20 1142141.24 00:14:07.247 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0xbd0b 00:14:07.247 nvme1n1 : 5.85 153.26 9.58 0.00 0.00 772434.74 8721.33 1264743.98 00:14:07.247 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:07.247 nvme1n1 : 5.80 129.56 8.10 0.00 0.00 914466.12 96791.63 1380893.93 00:14:07.247 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0x8000 00:14:07.247 nvme2n1 : 5.85 139.54 8.72 0.00 0.00 819525.70 67350.84 813049.70 00:14:07.247 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x8000 length 0x8000 00:14:07.247 nvme2n1 : 5.95 137.18 8.57 0.00 0.00 844143.22 112923.57 1497043.89 00:14:07.247 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0x8000 00:14:07.247 nvme2n2 : 5.95 166.67 10.42 0.00 0.00 678677.12 87919.06 929199.66 00:14:07.247 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x8000 length 0x8000 00:14:07.247 nvme2n2 : 5.95 116.62 7.29 0.00 0.00 956764.11 8116.38 1910021.51 00:14:07.247 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0x8000 00:14:07.247 nvme2n3 : 5.95 121.94 7.62 0.00 0.00 896821.07 49605.71 2064888.12 00:14:07.247 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x8000 length 0x8000 00:14:07.247 nvme2n3 : 5.98 115.04 7.19 0.00 0.00 941947.23 34078.72 2387526.89 00:14:07.247 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x0 length 0x2000 00:14:07.247 nvme3n1 : 5.96 212.04 13.25 0.00 0.00 501574.13 2961.72 1013085.74 00:14:07.247 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:07.247 Verification LBA range: start 0x2000 length 0x2000 00:14:07.247 nvme3n1 : 6.01 149.14 9.32 0.00 0.00 703332.11 1272.91 967916.31 00:14:07.247 [2024-11-18T23:09:26.625Z] =================================================================================================================== 00:14:07.247 [2024-11-18T23:09:26.625Z] Total : 1670.33 104.40 0.00 0.00 814911.72 1272.91 2387526.89 00:14:07.247 00:14:07.247 real 0m6.622s 00:14:07.247 user 0m12.334s 00:14:07.247 sys 0m0.355s 00:14:07.247 23:09:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.247 ************************************ 00:14:07.247 END TEST bdev_verify_big_io 
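The next run, bdev_write_zeroes, times a one-second write_zeroes workload on a single core; note in its result table below how nvme1n1 sustains roughly twice the IOPS of the xnvme-backed namespaces, echoing the gap already visible in the verify runs. Invocation as logged:

    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1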
00:14:07.247 ************************************ 00:14:07.247 23:09:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.247 23:09:26 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.247 23:09:26 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:07.247 23:09:26 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.247 23:09:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.247 ************************************ 00:14:07.247 START TEST bdev_write_zeroes 00:14:07.247 ************************************ 00:14:07.247 23:09:26 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.507 [2024-11-18 23:09:26.644746] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:07.507 [2024-11-18 23:09:26.645276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82532 ] 00:14:07.507 [2024-11-18 23:09:26.792555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.507 [2024-11-18 23:09:26.823783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.765 Running I/O for 1 seconds... 00:14:08.819 77760.00 IOPS, 303.75 MiB/s 00:14:08.820 Latency(us) 00:14:08.820 [2024-11-18T23:09:28.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.820 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme0n1 : 1.03 11109.43 43.40 0.00 0.00 11511.20 8519.68 20164.92 00:14:08.820 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme1n1 : 1.02 21817.27 85.22 0.00 0.00 5853.55 3579.27 13611.32 00:14:08.820 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme2n1 : 1.03 11096.71 43.35 0.00 0.00 11453.45 7208.96 18551.73 00:14:08.820 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme2n2 : 1.03 11084.33 43.30 0.00 0.00 11457.31 7360.20 18551.73 00:14:08.820 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme2n3 : 1.03 11071.78 43.25 0.00 0.00 11462.65 7511.43 18854.20 00:14:08.820 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.820 nvme3n1 : 1.03 11059.47 43.20 0.00 0.00 11471.47 7612.26 20467.40 00:14:08.820 [2024-11-18T23:09:28.198Z] =================================================================================================================== 00:14:08.820 [2024-11-18T23:09:28.198Z] Total : 77239.00 301.71 0.00 0.00 9888.84 3579.27 20467.40 00:14:09.079 00:14:09.079 real 0m1.610s 00:14:09.079 user 0m0.862s 00:14:09.079 sys 0m0.602s 00:14:09.079 23:09:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.079 23:09:28 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:09.079 ************************************ 00:14:09.079 
END TEST bdev_write_zeroes 00:14:09.079 ************************************ 00:14:09.079 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.079 23:09:28 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:09.079 23:09:28 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.079 23:09:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.079 ************************************ 00:14:09.079 START TEST bdev_json_nonenclosed 00:14:09.079 ************************************ 00:14:09.079 23:09:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.079 [2024-11-18 23:09:28.291608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:09.079 [2024-11-18 23:09:28.291716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82564 ] 00:14:09.079 [2024-11-18 23:09:28.430478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.338 [2024-11-18 23:09:28.461503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.338 [2024-11-18 23:09:28.461579] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:09.338 [2024-11-18 23:09:28.461598] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.338 [2024-11-18 23:09:28.461608] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.338 00:14:09.338 real 0m0.306s 00:14:09.338 user 0m0.120s 00:14:09.338 sys 0m0.084s 00:14:09.338 23:09:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.338 ************************************ 00:14:09.338 END TEST bdev_json_nonenclosed 00:14:09.338 ************************************ 00:14:09.338 23:09:28 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:09.338 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.338 23:09:28 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:09.338 23:09:28 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.338 23:09:28 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.338 ************************************ 00:14:09.338 START TEST bdev_json_nonarray 00:14:09.338 ************************************ 00:14:09.338 23:09:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.338 [2024-11-18 23:09:28.634021] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
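bdev_json_nonenclosed above and bdev_json_nonarray starting here feed bdevperf deliberately malformed configuration files and expect a clean error exit rather than a crash. The fixtures themselves are not reproduced in this log; hypothetical minimal versions that would trigger the two json_config.c errors seen in the trace look like this:

    # Hypothetical minimal fixtures (the real files are test/bdev/nonenclosed.json
    # and test/bdev/nonarray.json; contents inferred from the errors they trigger):
    printf '"subsystems": []\n'     > nonenclosed.json  # -> "not enclosed in {}"
    printf '{ "subsystems": {} }\n' > nonarray.json     # -> "'subsystems' should be an array"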
00:14:09.338 [2024-11-18 23:09:28.634124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82589 ] 00:14:09.606 [2024-11-18 23:09:28.779734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.606 [2024-11-18 23:09:28.810425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.606 [2024-11-18 23:09:28.810518] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:09.606 [2024-11-18 23:09:28.810533] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.606 [2024-11-18 23:09:28.810543] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.606 00:14:09.606 real 0m0.314s 00:14:09.606 user 0m0.119s 00:14:09.606 sys 0m0.092s 00:14:09.606 23:09:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.606 23:09:28 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:09.606 ************************************ 00:14:09.606 END TEST bdev_json_nonarray 00:14:09.606 ************************************ 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:09.606 23:09:28 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:10.171 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:13.466 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.466 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.466 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.724 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:13.724 00:14:13.724 real 0m50.079s 00:14:13.724 user 1m17.927s 00:14:13.724 sys 0m34.537s 00:14:13.724 23:09:33 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.724 23:09:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:13.724 ************************************ 00:14:13.724 END TEST blockdev_xnvme 00:14:13.724 ************************************ 00:14:13.724 23:09:33 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:13.724 23:09:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.724 23:09:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.725 23:09:33 -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.725 ************************************ 00:14:13.725 START TEST ublk 00:14:13.725 ************************************ 00:14:13.725 23:09:33 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:13.984 * Looking for test storage... 00:14:13.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:13.984 23:09:33 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:13.984 23:09:33 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:13.984 23:09:33 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:13.984 23:09:33 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:13.984 23:09:33 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:13.984 23:09:33 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:13.984 23:09:33 ublk -- scripts/common.sh@345 -- # : 1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:13.984 23:09:33 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:13.984 23:09:33 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@353 -- # local d=1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:13.984 23:09:33 ublk -- scripts/common.sh@355 -- # echo 1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:13.984 23:09:33 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@353 -- # local d=2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:13.984 23:09:33 ublk -- scripts/common.sh@355 -- # echo 2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:13.984 23:09:33 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:13.984 23:09:33 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:13.984 23:09:33 ublk -- scripts/common.sh@368 -- # return 0 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:13.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.984 --rc genhtml_branch_coverage=1 00:14:13.984 --rc genhtml_function_coverage=1 00:14:13.984 --rc genhtml_legend=1 00:14:13.984 --rc geninfo_all_blocks=1 00:14:13.984 --rc geninfo_unexecuted_blocks=1 00:14:13.984 00:14:13.984 ' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:13.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.984 --rc genhtml_branch_coverage=1 00:14:13.984 --rc genhtml_function_coverage=1 00:14:13.984 --rc genhtml_legend=1 00:14:13.984 --rc geninfo_all_blocks=1 00:14:13.984 --rc geninfo_unexecuted_blocks=1 00:14:13.984 00:14:13.984 ' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:13.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.984 --rc genhtml_branch_coverage=1 00:14:13.984 --rc genhtml_function_coverage=1 00:14:13.984 --rc genhtml_legend=1 00:14:13.984 --rc geninfo_all_blocks=1 00:14:13.984 --rc geninfo_unexecuted_blocks=1 00:14:13.984 00:14:13.984 ' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:13.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:13.984 --rc genhtml_branch_coverage=1 00:14:13.984 --rc genhtml_function_coverage=1 00:14:13.984 --rc genhtml_legend=1 00:14:13.984 --rc geninfo_all_blocks=1 00:14:13.984 --rc geninfo_unexecuted_blocks=1 00:14:13.984 00:14:13.984 ' 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:13.984 23:09:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:13.984 23:09:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:13.984 23:09:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:13.984 23:09:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:13.984 23:09:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:13.984 23:09:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:13.984 23:09:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:13.984 23:09:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:13.984 23:09:33 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:13.984 23:09:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.984 23:09:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:13.984 ************************************ 00:14:13.984 START TEST test_save_ublk_config 00:14:13.984 ************************************ 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=82880 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 82880 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82880 ']' 00:14:13.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:13.984 23:09:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:13.984 [2024-11-18 23:09:33.319000] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
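As an aside, the scripts/common.sh chatter a few lines up (ver1/ver2 arrays, decimal 1, decimal 2) is the lcov version gate "lt 1.15 2": split both versions on dots and dashes, then compare component-wise. A condensed sketch of that logic, assuming plain numeric components (the real helper handles more operators and edge cases):

    # Condensed sketch of the cmp_versions/lt logic traced above.
    lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # strictly smaller: yes
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # strictly larger: no
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov older than 2.x; use the legacy --rc options"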
00:14:13.984 [2024-11-18 23:09:33.319114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82880 ] 00:14:14.243 [2024-11-18 23:09:33.467534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.243 [2024-11-18 23:09:33.509266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.810 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:14.810 [2024-11-18 23:09:34.148174] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:14.810 [2024-11-18 23:09:34.148438] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:14.810 malloc0 00:14:14.810 [2024-11-18 23:09:34.172284] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:14.810 [2024-11-18 23:09:34.172368] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:14.810 [2024-11-18 23:09:34.172376] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:14.810 [2024-11-18 23:09:34.172386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:14.810 [2024-11-18 23:09:34.180293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:14.810 [2024-11-18 23:09:34.180319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:15.069 [2024-11-18 23:09:34.188174] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:15.069 [2024-11-18 23:09:34.188268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:15.069 [2024-11-18 23:09:34.205183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:15.069 0 00:14:15.069 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.069 23:09:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:15.069 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.069 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.327 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.327 23:09:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:15.327 "subsystems": [ 00:14:15.327 { 00:14:15.327 "subsystem": "fsdev", 00:14:15.327 "config": [ 00:14:15.327 { 00:14:15.327 "method": "fsdev_set_opts", 00:14:15.327 "params": { 00:14:15.327 "fsdev_io_pool_size": 65535, 00:14:15.327 "fsdev_io_cache_size": 256 00:14:15.327 } 00:14:15.327 } 00:14:15.327 ] 00:14:15.327 }, 00:14:15.327 { 00:14:15.327 "subsystem": "keyring", 00:14:15.327 "config": [] 00:14:15.327 }, 00:14:15.327 { 00:14:15.327 "subsystem": "iobuf", 00:14:15.327 "config": [ 00:14:15.327 { 
00:14:15.327 "method": "iobuf_set_options", 00:14:15.327 "params": { 00:14:15.327 "small_pool_count": 8192, 00:14:15.327 "large_pool_count": 1024, 00:14:15.327 "small_bufsize": 8192, 00:14:15.327 "large_bufsize": 135168 00:14:15.327 } 00:14:15.327 } 00:14:15.327 ] 00:14:15.327 }, 00:14:15.327 { 00:14:15.327 "subsystem": "sock", 00:14:15.327 "config": [ 00:14:15.327 { 00:14:15.328 "method": "sock_set_default_impl", 00:14:15.328 "params": { 00:14:15.328 "impl_name": "posix" 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "sock_impl_set_options", 00:14:15.328 "params": { 00:14:15.328 "impl_name": "ssl", 00:14:15.328 "recv_buf_size": 4096, 00:14:15.328 "send_buf_size": 4096, 00:14:15.328 "enable_recv_pipe": true, 00:14:15.328 "enable_quickack": false, 00:14:15.328 "enable_placement_id": 0, 00:14:15.328 "enable_zerocopy_send_server": true, 00:14:15.328 "enable_zerocopy_send_client": false, 00:14:15.328 "zerocopy_threshold": 0, 00:14:15.328 "tls_version": 0, 00:14:15.328 "enable_ktls": false 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "sock_impl_set_options", 00:14:15.328 "params": { 00:14:15.328 "impl_name": "posix", 00:14:15.328 "recv_buf_size": 2097152, 00:14:15.328 "send_buf_size": 2097152, 00:14:15.328 "enable_recv_pipe": true, 00:14:15.328 "enable_quickack": false, 00:14:15.328 "enable_placement_id": 0, 00:14:15.328 "enable_zerocopy_send_server": true, 00:14:15.328 "enable_zerocopy_send_client": false, 00:14:15.328 "zerocopy_threshold": 0, 00:14:15.328 "tls_version": 0, 00:14:15.328 "enable_ktls": false 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "vmd", 00:14:15.328 "config": [] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "accel", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "accel_set_options", 00:14:15.328 "params": { 00:14:15.328 "small_cache_size": 128, 00:14:15.328 "large_cache_size": 16, 00:14:15.328 "task_count": 2048, 00:14:15.328 "sequence_count": 2048, 00:14:15.328 "buf_count": 2048 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "bdev", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "bdev_set_options", 00:14:15.328 "params": { 00:14:15.328 "bdev_io_pool_size": 65535, 00:14:15.328 "bdev_io_cache_size": 256, 00:14:15.328 "bdev_auto_examine": true, 00:14:15.328 "iobuf_small_cache_size": 128, 00:14:15.328 "iobuf_large_cache_size": 16 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_raid_set_options", 00:14:15.328 "params": { 00:14:15.328 "process_window_size_kb": 1024, 00:14:15.328 "process_max_bandwidth_mb_sec": 0 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_iscsi_set_options", 00:14:15.328 "params": { 00:14:15.328 "timeout_sec": 30 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_nvme_set_options", 00:14:15.328 "params": { 00:14:15.328 "action_on_timeout": "none", 00:14:15.328 "timeout_us": 0, 00:14:15.328 "timeout_admin_us": 0, 00:14:15.328 "keep_alive_timeout_ms": 10000, 00:14:15.328 "arbitration_burst": 0, 00:14:15.328 "low_priority_weight": 0, 00:14:15.328 "medium_priority_weight": 0, 00:14:15.328 "high_priority_weight": 0, 00:14:15.328 "nvme_adminq_poll_period_us": 10000, 00:14:15.328 "nvme_ioq_poll_period_us": 0, 00:14:15.328 "io_queue_requests": 0, 00:14:15.328 "delay_cmd_submit": true, 00:14:15.328 "transport_retry_count": 4, 00:14:15.328 "bdev_retry_count": 3, 00:14:15.328 
"transport_ack_timeout": 0, 00:14:15.328 "ctrlr_loss_timeout_sec": 0, 00:14:15.328 "reconnect_delay_sec": 0, 00:14:15.328 "fast_io_fail_timeout_sec": 0, 00:14:15.328 "disable_auto_failback": false, 00:14:15.328 "generate_uuids": false, 00:14:15.328 "transport_tos": 0, 00:14:15.328 "nvme_error_stat": false, 00:14:15.328 "rdma_srq_size": 0, 00:14:15.328 "io_path_stat": false, 00:14:15.328 "allow_accel_sequence": false, 00:14:15.328 "rdma_max_cq_size": 0, 00:14:15.328 "rdma_cm_event_timeout_ms": 0, 00:14:15.328 "dhchap_digests": [ 00:14:15.328 "sha256", 00:14:15.328 "sha384", 00:14:15.328 "sha512" 00:14:15.328 ], 00:14:15.328 "dhchap_dhgroups": [ 00:14:15.328 "null", 00:14:15.328 "ffdhe2048", 00:14:15.328 "ffdhe3072", 00:14:15.328 "ffdhe4096", 00:14:15.328 "ffdhe6144", 00:14:15.328 "ffdhe8192" 00:14:15.328 ] 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_nvme_set_hotplug", 00:14:15.328 "params": { 00:14:15.328 "period_us": 100000, 00:14:15.328 "enable": false 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_malloc_create", 00:14:15.328 "params": { 00:14:15.328 "name": "malloc0", 00:14:15.328 "num_blocks": 8192, 00:14:15.328 "block_size": 4096, 00:14:15.328 "physical_block_size": 4096, 00:14:15.328 "uuid": "e22a9d14-82a4-4500-9971-a13f94ade39a", 00:14:15.328 "optimal_io_boundary": 0, 00:14:15.328 "md_size": 0, 00:14:15.328 "dif_type": 0, 00:14:15.328 "dif_is_head_of_md": false, 00:14:15.328 "dif_pi_format": 0 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "bdev_wait_for_examine" 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "scsi", 00:14:15.328 "config": null 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "scheduler", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "framework_set_scheduler", 00:14:15.328 "params": { 00:14:15.328 "name": "static" 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "vhost_scsi", 00:14:15.328 "config": [] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "vhost_blk", 00:14:15.328 "config": [] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "ublk", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "ublk_create_target", 00:14:15.328 "params": { 00:14:15.328 "cpumask": "1" 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "ublk_start_disk", 00:14:15.328 "params": { 00:14:15.328 "bdev_name": "malloc0", 00:14:15.328 "ublk_id": 0, 00:14:15.328 "num_queues": 1, 00:14:15.328 "queue_depth": 128 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "nbd", 00:14:15.328 "config": [] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "nvmf", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "nvmf_set_config", 00:14:15.328 "params": { 00:14:15.328 "discovery_filter": "match_any", 00:14:15.328 "admin_cmd_passthru": { 00:14:15.328 "identify_ctrlr": false 00:14:15.328 }, 00:14:15.328 "dhchap_digests": [ 00:14:15.328 "sha256", 00:14:15.328 "sha384", 00:14:15.328 "sha512" 00:14:15.328 ], 00:14:15.328 "dhchap_dhgroups": [ 00:14:15.328 "null", 00:14:15.328 "ffdhe2048", 00:14:15.328 "ffdhe3072", 00:14:15.328 "ffdhe4096", 00:14:15.328 "ffdhe6144", 00:14:15.328 "ffdhe8192" 00:14:15.328 ] 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "method": "nvmf_set_max_subsystems", 00:14:15.328 "params": { 00:14:15.328 "max_subsystems": 1024 00:14:15.328 } 00:14:15.328 }, 00:14:15.328 
{ 00:14:15.328 "method": "nvmf_set_crdt", 00:14:15.328 "params": { 00:14:15.328 "crdt1": 0, 00:14:15.328 "crdt2": 0, 00:14:15.328 "crdt3": 0 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }, 00:14:15.328 { 00:14:15.328 "subsystem": "iscsi", 00:14:15.328 "config": [ 00:14:15.328 { 00:14:15.328 "method": "iscsi_set_options", 00:14:15.328 "params": { 00:14:15.328 "node_base": "iqn.2016-06.io.spdk", 00:14:15.328 "max_sessions": 128, 00:14:15.328 "max_connections_per_session": 2, 00:14:15.328 "max_queue_depth": 64, 00:14:15.328 "default_time2wait": 2, 00:14:15.328 "default_time2retain": 20, 00:14:15.328 "first_burst_length": 8192, 00:14:15.328 "immediate_data": true, 00:14:15.328 "allow_duplicated_isid": false, 00:14:15.328 "error_recovery_level": 0, 00:14:15.328 "nop_timeout": 60, 00:14:15.328 "nop_in_interval": 30, 00:14:15.328 "disable_chap": false, 00:14:15.328 "require_chap": false, 00:14:15.328 "mutual_chap": false, 00:14:15.328 "chap_group": 0, 00:14:15.328 "max_large_datain_per_connection": 64, 00:14:15.328 "max_r2t_per_connection": 4, 00:14:15.328 "pdu_pool_size": 36864, 00:14:15.328 "immediate_data_pool_size": 16384, 00:14:15.328 "data_out_pool_size": 2048 00:14:15.328 } 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 } 00:14:15.328 ] 00:14:15.328 }' 00:14:15.328 23:09:34 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 82880 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82880 ']' 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82880 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82880 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.329 killing process with pid 82880 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82880' 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82880 00:14:15.329 23:09:34 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82880 00:14:15.329 [2024-11-18 23:09:34.694400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:15.587 [2024-11-18 23:09:34.731254] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:15.587 [2024-11-18 23:09:34.731390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:15.587 [2024-11-18 23:09:34.739183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:15.587 [2024-11-18 23:09:34.739241] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:15.587 [2024-11-18 23:09:34.739249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:15.587 [2024-11-18 23:09:34.739273] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:15.587 [2024-11-18 23:09:34.739415] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=82918 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 82918 
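The JSON object dumped above is the literal output of the save_config RPC: an ordered, replayable list of method/params pairs per subsystem, including the ublk_create_target and ublk_start_disk calls the test made. The second half of the test, which starts here with tgtpid=82918, feeds that same document back into a fresh target; done by hand the round trip would be roughly:

    # Round trip performed by the test (socket and binary paths as in the log):
    scripts/rpc.py -s /var/tmp/spdk.sock save_config > ublk_config.json
    # Restart and replay; process substitution is why the log shows -c /dev/fd/63:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c <(cat ublk_config.json)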
00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 82918 ']' 00:14:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:15.846 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:15.846 "subsystems": [ 00:14:15.846 { 00:14:15.846 "subsystem": "fsdev", 00:14:15.846 "config": [ 00:14:15.846 { 00:14:15.846 "method": "fsdev_set_opts", 00:14:15.846 "params": { 00:14:15.846 "fsdev_io_pool_size": 65535, 00:14:15.846 "fsdev_io_cache_size": 256 00:14:15.846 } 00:14:15.846 } 00:14:15.846 ] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "keyring", 00:14:15.846 "config": [] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "iobuf", 00:14:15.846 "config": [ 00:14:15.846 { 00:14:15.846 "method": "iobuf_set_options", 00:14:15.846 "params": { 00:14:15.846 "small_pool_count": 8192, 00:14:15.846 "large_pool_count": 1024, 00:14:15.846 "small_bufsize": 8192, 00:14:15.846 "large_bufsize": 135168 00:14:15.846 } 00:14:15.846 } 00:14:15.846 ] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "sock", 00:14:15.846 "config": [ 00:14:15.846 { 00:14:15.846 "method": "sock_set_default_impl", 00:14:15.846 "params": { 00:14:15.846 "impl_name": "posix" 00:14:15.846 } 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "method": "sock_impl_set_options", 00:14:15.846 "params": { 00:14:15.846 "impl_name": "ssl", 00:14:15.846 "recv_buf_size": 4096, 00:14:15.846 "send_buf_size": 4096, 00:14:15.846 "enable_recv_pipe": true, 00:14:15.846 "enable_quickack": false, 00:14:15.846 "enable_placement_id": 0, 00:14:15.846 "enable_zerocopy_send_server": true, 00:14:15.846 "enable_zerocopy_send_client": false, 00:14:15.846 "zerocopy_threshold": 0, 00:14:15.846 "tls_version": 0, 00:14:15.846 "enable_ktls": false 00:14:15.846 } 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "method": "sock_impl_set_options", 00:14:15.846 "params": { 00:14:15.846 "impl_name": "posix", 00:14:15.846 "recv_buf_size": 2097152, 00:14:15.846 "send_buf_size": 2097152, 00:14:15.846 "enable_recv_pipe": true, 00:14:15.846 "enable_quickack": false, 00:14:15.846 "enable_placement_id": 0, 00:14:15.846 "enable_zerocopy_send_server": true, 00:14:15.846 "enable_zerocopy_send_client": false, 00:14:15.846 "zerocopy_threshold": 0, 00:14:15.846 "tls_version": 0, 00:14:15.846 "enable_ktls": false 00:14:15.846 } 00:14:15.846 } 00:14:15.846 ] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "vmd", 00:14:15.846 "config": [] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "accel", 00:14:15.846 "config": [ 00:14:15.846 { 00:14:15.846 "method": "accel_set_options", 00:14:15.846 "params": { 00:14:15.846 "small_cache_size": 128, 00:14:15.846 "large_cache_size": 16, 00:14:15.846 "task_count": 2048, 00:14:15.846 
"sequence_count": 2048, 00:14:15.846 "buf_count": 2048 00:14:15.846 } 00:14:15.846 } 00:14:15.846 ] 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "subsystem": "bdev", 00:14:15.846 "config": [ 00:14:15.846 { 00:14:15.846 "method": "bdev_set_options", 00:14:15.846 "params": { 00:14:15.846 "bdev_io_pool_size": 65535, 00:14:15.846 "bdev_io_cache_size": 256, 00:14:15.846 "bdev_auto_examine": true, 00:14:15.846 "iobuf_small_cache_size": 128, 00:14:15.846 "iobuf_large_cache_size": 16 00:14:15.846 } 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "method": "bdev_raid_set_options", 00:14:15.846 "params": { 00:14:15.846 "process_window_size_kb": 1024, 00:14:15.846 "process_max_bandwidth_mb_sec": 0 00:14:15.846 } 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "method": "bdev_iscsi_set_options", 00:14:15.846 "params": { 00:14:15.846 "timeout_sec": 30 00:14:15.846 } 00:14:15.846 }, 00:14:15.846 { 00:14:15.846 "method": "bdev_nvme_set_options", 00:14:15.846 "params": { 00:14:15.846 "action_on_timeout": "none", 00:14:15.846 "timeout_us": 0, 00:14:15.846 "timeout_admin_us": 0, 00:14:15.846 "keep_alive_timeout_ms": 10000, 00:14:15.846 "arbitration_burst": 0, 00:14:15.846 "low_priority_weight": 0, 00:14:15.846 "medium_priority_weight": 0, 00:14:15.846 "high_priority_weight": 0, 00:14:15.846 "nvme_adminq_poll_period_us": 10000, 00:14:15.846 "nvme_ioq_poll_period_us": 0, 00:14:15.846 "io_queue_requests": 0, 00:14:15.846 "delay_cmd_submit": true, 00:14:15.846 "transport_retry_count": 4, 00:14:15.846 "bdev_retry_count": 3, 00:14:15.846 "transport_ack_timeout": 0, 00:14:15.846 "ctrlr_loss_timeout_sec": 0, 00:14:15.846 "reconnect_delay_sec": 0, 00:14:15.846 "fast_io_fail_timeout_sec": 0, 00:14:15.846 "disable_auto_failback": false, 00:14:15.846 "generate_uuids": false, 00:14:15.846 "transport_tos": 0, 00:14:15.846 "nvme_error_stat": false, 00:14:15.846 "rdma_srq_size": 0, 00:14:15.846 "io_path_stat": false, 00:14:15.847 "allow_accel_sequence": false, 00:14:15.847 "rdma_max_cq_size": 0, 00:14:15.847 "rdma_cm_event_timeout_ms": 0, 00:14:15.847 "dhchap_digests": [ 00:14:15.847 "sha256", 00:14:15.847 "sha384", 00:14:15.847 "sha512" 00:14:15.847 ], 00:14:15.847 "dhchap_dhgroups": [ 00:14:15.847 "null", 00:14:15.847 "ffdhe2048", 00:14:15.847 "ffdhe3072", 00:14:15.847 "ffdhe4096", 00:14:15.847 "ffdhe6144", 00:14:15.847 "ffdhe8192" 00:14:15.847 ] 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "bdev_nvme_set_hotplug", 00:14:15.847 "params": { 00:14:15.847 "period_us": 100000, 00:14:15.847 "enable": false 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "bdev_malloc_create", 00:14:15.847 "params": { 00:14:15.847 "name": "malloc0", 00:14:15.847 "num_blocks": 8192, 00:14:15.847 "block_size": 4096, 00:14:15.847 "physical_block_size": 4096, 00:14:15.847 "uuid": "e22a9d14-82a4-4500-9971-a13f94ade39a", 00:14:15.847 "optimal_io_boundary": 0, 00:14:15.847 "md_size": 0, 00:14:15.847 "dif_type": 0, 00:14:15.847 "dif_is_head_of_md": false, 00:14:15.847 "dif_pi_format": 0 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "bdev_wait_for_examine" 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "scsi", 00:14:15.847 "config": null 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "scheduler", 00:14:15.847 "config": [ 00:14:15.847 { 00:14:15.847 "method": "framework_set_scheduler", 00:14:15.847 "params": { 00:14:15.847 "name": "static" 00:14:15.847 } 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": 
"vhost_scsi", 00:14:15.847 "config": [] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "vhost_blk", 00:14:15.847 "config": [] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "ublk", 00:14:15.847 "config": [ 00:14:15.847 { 00:14:15.847 "method": "ublk_create_target", 00:14:15.847 "params": { 00:14:15.847 "cpumask": "1" 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "ublk_start_disk", 00:14:15.847 "params": { 00:14:15.847 "bdev_name": "malloc0", 00:14:15.847 "ublk_id": 0, 00:14:15.847 "num_queues": 1, 00:14:15.847 "queue_depth": 128 00:14:15.847 } 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "nbd", 00:14:15.847 "config": [] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "nvmf", 00:14:15.847 "config": [ 00:14:15.847 { 00:14:15.847 "method": "nvmf_set_config", 00:14:15.847 "params": { 00:14:15.847 "discovery_filter": "match_any", 00:14:15.847 "admin_cmd_passthru": { 00:14:15.847 "identify_ctrlr": false 00:14:15.847 }, 00:14:15.847 "dhchap_digests": [ 00:14:15.847 "sha256", 00:14:15.847 "sha384", 00:14:15.847 "sha512" 00:14:15.847 ], 00:14:15.847 "dhchap_dhgroups": [ 00:14:15.847 "null", 00:14:15.847 "ffdhe2048", 00:14:15.847 "ffdhe3072", 00:14:15.847 "ffdhe4096", 00:14:15.847 "ffdhe6144", 00:14:15.847 "ffdhe8192" 00:14:15.847 ] 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "nvmf_set_max_subsystems", 00:14:15.847 "params": { 00:14:15.847 "max_subsystems": 1024 00:14:15.847 } 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "method": "nvmf_set_crdt", 00:14:15.847 "params": { 00:14:15.847 "crdt1": 0, 00:14:15.847 "crdt2": 0, 00:14:15.847 "crdt3": 0 00:14:15.847 } 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 }, 00:14:15.847 { 00:14:15.847 "subsystem": "iscsi", 00:14:15.847 "config": [ 00:14:15.847 { 00:14:15.847 "method": "iscsi_set_options", 00:14:15.847 "params": { 00:14:15.847 "node_base": "iqn.2016-06.io.spdk", 00:14:15.847 "max_sessions": 128, 00:14:15.847 "max_connections_per_session": 2, 00:14:15.847 "max_queue_depth": 64, 00:14:15.847 "default_time2wait": 2, 00:14:15.847 "default_time2retain": 20, 00:14:15.847 "first_burst_length": 8192, 00:14:15.847 "immediate_data": true, 00:14:15.847 "allow_duplicated_isid": false, 00:14:15.847 "error_recovery_level": 0, 00:14:15.847 "nop_timeout": 60, 00:14:15.847 "nop_in_interval": 30, 00:14:15.847 "disable_chap": false, 00:14:15.847 "require_chap": false, 00:14:15.847 "mutual_chap": false, 00:14:15.847 "chap_group": 0, 00:14:15.847 "max_large_datain_per_connection": 64, 00:14:15.847 "max_r2t_per_connection": 4, 00:14:15.847 "pdu_pool_size": 36864, 00:14:15.847 "immediate_data_pool_size": 16384, 00:14:15.847 "data_out_pool_size": 2048 00:14:15.847 } 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 } 00:14:15.847 ] 00:14:15.847 }' 00:14:15.847 [2024-11-18 23:09:35.128528] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:15.847 [2024-11-18 23:09:35.128644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82918 ] 00:14:16.105 [2024-11-18 23:09:35.276556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.105 [2024-11-18 23:09:35.316931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.363 [2024-11-18 23:09:35.615172] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.363 [2024-11-18 23:09:35.615446] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.363 [2024-11-18 23:09:35.623300] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:16.363 [2024-11-18 23:09:35.623381] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:16.363 [2024-11-18 23:09:35.623388] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.363 [2024-11-18 23:09:35.623395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.363 [2024-11-18 23:09:35.632242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.363 [2024-11-18 23:09:35.632267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:16.363 [2024-11-18 23:09:35.639180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.363 [2024-11-18 23:09:35.639268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.363 [2024-11-18 23:09:35.656182] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 82918 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 82918 ']' 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 82918 00:14:16.621 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:16.880 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.880 23:09:35 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82918 00:14:16.880 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.880 killing process with pid 82918 00:14:16.880 
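The check traced just above is the point of the whole test: after the config replay, ublk_get_disks must report the same device and the block node must exist. Reproduced outside the harness, the same verification is:

    # Post-replay verification, as in ublk.sh lines 122-123 above:
    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect: /dev/ublkb0
    [[ -b /dev/ublkb0 ]] && echo "ublk block device is back"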
23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.880 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82918' 00:14:16.880 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 82918 00:14:16.880 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 82918 00:14:16.880 [2024-11-18 23:09:36.191509] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:16.880 [2024-11-18 23:09:36.223258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:16.880 [2024-11-18 23:09:36.223383] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:16.880 [2024-11-18 23:09:36.230170] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:16.880 [2024-11-18 23:09:36.230234] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:16.880 [2024-11-18 23:09:36.230242] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:16.880 [2024-11-18 23:09:36.230269] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:16.880 [2024-11-18 23:09:36.230401] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:17.445 23:09:36 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:17.445 00:14:17.445 real 0m3.304s 00:14:17.445 user 0m2.440s 00:14:17.445 sys 0m1.472s 00:14:17.445 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.445 23:09:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.445 ************************************ 00:14:17.445 END TEST test_save_ublk_config 00:14:17.445 ************************************ 00:14:17.445 23:09:36 ublk -- ublk/ublk.sh@139 -- # spdk_pid=82963 00:14:17.445 23:09:36 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.445 23:09:36 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:17.445 23:09:36 ublk -- ublk/ublk.sh@141 -- # waitforlisten 82963 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@831 -- # '[' -z 82963 ']' 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.445 23:09:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:17.445 [2024-11-18 23:09:36.657868] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
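Each suite here runs under the run_test wrapper from common/autotest_common.sh, which produces the START TEST / END TEST banners and the real/user/sys timing seen above. A hypothetical sketch of its behavior, inferred from the trace (the real helper also manages xtrace and result bookkeeping):

    # Rough stand-in for run_test, inferred from the banners and the
    # '[' N -le 1 ']' argument checks visible in the xtrace output:
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # the suite itself; `time` yields the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }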
00:14:17.445 [2024-11-18 23:09:36.657976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82963 ] 00:14:17.445 [2024-11-18 23:09:36.802980] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:17.704 [2024-11-18 23:09:36.835631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.704 [2024-11-18 23:09:36.835677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.271 23:09:37 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.271 23:09:37 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:18.271 23:09:37 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:18.271 23:09:37 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:18.271 23:09:37 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.271 23:09:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 ************************************ 00:14:18.271 START TEST test_create_ublk 00:14:18.271 ************************************ 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 [2024-11-18 23:09:37.509174] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.271 [2024-11-18 23:09:37.510263] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 [2024-11-18 23:09:37.568294] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:18.271 [2024-11-18 23:09:37.568663] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:18.271 [2024-11-18 23:09:37.568678] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:18.271 [2024-11-18 23:09:37.568686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:18.271 [2024-11-18 23:09:37.577351] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:18.271 [2024-11-18 23:09:37.577378] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:18.271 
[2024-11-18 23:09:37.584185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:18.271 [2024-11-18 23:09:37.584790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:18.271 [2024-11-18 23:09:37.605183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:18.271 23:09:37 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:18.271 { 00:14:18.271 "ublk_device": "/dev/ublkb0", 00:14:18.271 "id": 0, 00:14:18.271 "queue_depth": 512, 00:14:18.271 "num_queues": 4, 00:14:18.271 "bdev_name": "Malloc0" 00:14:18.271 } 00:14:18.271 ]' 00:14:18.271 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
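run_fio_test has now assembled the full fio command, which is executed in the next trace line. Because the job is --time_based and the write phase uses the whole runtime, fio immediately warns that the verification read phase will never start; that is expected here, since every block is still stamped with the pattern for a later read to check. The same command, reformatted for readability (flags identical):

    #   --size=134217728            span the whole 128 MiB ublk device
    #   --rw=write --direct=1       sequential O_DIRECT writes
    #   --time_based --runtime=10   keep writing for 10 seconds
    #   --verify=pattern --verify_pattern=0xcc
    #                               every block is stamped with 0xcc
    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0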
00:14:18.530 23:09:37 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:18.530 fio: verification read phase will never start because write phase uses all of runtime 00:14:18.530 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:18.530 fio-3.35 00:14:18.530 Starting 1 process 00:14:30.743 00:14:30.743 fio_test: (groupid=0, jobs=1): err= 0: pid=83010: Mon Nov 18 23:09:48 2024 00:14:30.743 write: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(797MiB/10001msec); 0 zone resets 00:14:30.743 clat (usec): min=33, max=3818, avg=48.18, stdev=82.35 00:14:30.743 lat (usec): min=34, max=3819, avg=48.66, stdev=82.36 00:14:30.743 clat percentiles (usec): 00:14:30.743 | 1.00th=[ 38], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:14:30.743 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:14:30.743 | 70.00th=[ 47], 80.00th=[ 48], 90.00th=[ 53], 95.00th=[ 59], 00:14:30.743 | 99.00th=[ 68], 99.50th=[ 75], 99.90th=[ 1156], 99.95th=[ 2540], 00:14:30.743 | 99.99th=[ 3523] 00:14:30.743 bw ( KiB/s): min=70872, max=85376, per=100.00%, avg=81679.16, stdev=4086.34, samples=19 00:14:30.743 iops : min=17718, max=21344, avg=20419.79, stdev=1021.59, samples=19 00:14:30.743 lat (usec) : 50=85.76%, 100=13.99%, 250=0.10%, 500=0.02%, 750=0.01% 00:14:30.743 lat (usec) : 1000=0.01% 00:14:30.743 lat (msec) : 2=0.04%, 4=0.07% 00:14:30.743 cpu : usr=3.53%, sys=16.33%, ctx=204095, majf=0, minf=797 00:14:30.743 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:30.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.743 issued rwts: total=0,204098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.743 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:30.743 00:14:30.743 Run status group 0 (all jobs): 00:14:30.743 WRITE: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=797MiB (836MB), run=10001-10001msec 00:14:30.743 00:14:30.743 Disk stats (read/write): 00:14:30.743 ublkb0: ios=0/201929, merge=0/0, ticks=0/8036, in_queue=8037, util=99.09% 00:14:30.743 23:09:48 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:30.743 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.743 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.743 [2024-11-18 23:09:48.025006] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.743 [2024-11-18 23:09:48.062612] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.743 [2024-11-18 23:09:48.063532] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.743 [2024-11-18 23:09:48.068181] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.743 [2024-11-18 23:09:48.068398] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:30.743 [2024-11-18 23:09:48.068409] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.084262] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:30.744 request: 00:14:30.744 { 00:14:30.744 "ublk_id": 0, 00:14:30.744 "method": "ublk_stop_disk", 00:14:30.744 "req_id": 1 00:14:30.744 } 00:14:30.744 Got JSON-RPC error response 00:14:30.744 response: 00:14:30.744 { 00:14:30.744 "code": -19, 00:14:30.744 "message": "No such device" 00:14:30.744 } 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:30.744 23:09:48 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.108228] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:30.744 [2024-11-18 23:09:48.109658] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:30.744 [2024-11-18 23:09:48.109691] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:30.744 23:09:48 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:30.744 23:09:48 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:30.744 00:14:30.744 real 0m10.769s 00:14:30.744 user 0m0.660s 00:14:30.744 sys 0m1.708s 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 ************************************ 00:14:30.744 END TEST test_create_ublk 00:14:30.744 ************************************ 00:14:30.744 23:09:48 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:30.744 23:09:48 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:30.744 23:09:48 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.744 23:09:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 ************************************ 00:14:30.744 START TEST test_create_multi_ublk 00:14:30.744 ************************************ 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.319167] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.744 [2024-11-18 23:09:48.320062] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.390279] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:30.744 [2024-11-18 23:09:48.390565] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:30.744 [2024-11-18 23:09:48.390578] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:30.744 [2024-11-18 23:09:48.390584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.744 [2024-11-18 23:09:48.403190] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.744 [2024-11-18 23:09:48.403203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.744 [2024-11-18 23:09:48.415187] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.744 [2024-11-18 23:09:48.415676] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:30.744 [2024-11-18 23:09:48.423431] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.505268] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:30.744 [2024-11-18 23:09:48.505554] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:30.744 [2024-11-18 23:09:48.505566] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.744 [2024-11-18 23:09:48.505573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.744 [2024-11-18 23:09:48.517185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.744 [2024-11-18 23:09:48.517206] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.744 [2024-11-18 23:09:48.529174] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.744 [2024-11-18 23:09:48.529658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:30.744 [2024-11-18 23:09:48.565183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.744 23:09:48 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.744 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.744 [2024-11-18 23:09:48.648270] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:30.744 [2024-11-18 23:09:48.648553] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:30.744 [2024-11-18 23:09:48.648566] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:30.744 [2024-11-18 23:09:48.648570] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.744 [2024-11-18 23:09:48.660197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.745 [2024-11-18 23:09:48.660213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.745 [2024-11-18 23:09:48.672180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.745 [2024-11-18 23:09:48.672664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:30.745 [2024-11-18 23:09:48.685208] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 [2024-11-18 23:09:48.768265] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:30.745 [2024-11-18 23:09:48.768559] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:30.745 [2024-11-18 23:09:48.768571] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:30.745 [2024-11-18 23:09:48.768578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.745 [2024-11-18 
23:09:48.781328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.745 [2024-11-18 23:09:48.781350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.745 [2024-11-18 23:09:48.792175] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.745 [2024-11-18 23:09:48.792658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:30.745 [2024-11-18 23:09:48.832179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:30.745 { 00:14:30.745 "ublk_device": "/dev/ublkb0", 00:14:30.745 "id": 0, 00:14:30.745 "queue_depth": 512, 00:14:30.745 "num_queues": 4, 00:14:30.745 "bdev_name": "Malloc0" 00:14:30.745 }, 00:14:30.745 { 00:14:30.745 "ublk_device": "/dev/ublkb1", 00:14:30.745 "id": 1, 00:14:30.745 "queue_depth": 512, 00:14:30.745 "num_queues": 4, 00:14:30.745 "bdev_name": "Malloc1" 00:14:30.745 }, 00:14:30.745 { 00:14:30.745 "ublk_device": "/dev/ublkb2", 00:14:30.745 "id": 2, 00:14:30.745 "queue_depth": 512, 00:14:30.745 "num_queues": 4, 00:14:30.745 "bdev_name": "Malloc2" 00:14:30.745 }, 00:14:30.745 { 00:14:30.745 "ublk_device": "/dev/ublkb3", 00:14:30.745 "id": 3, 00:14:30.745 "queue_depth": 512, 00:14:30.745 "num_queues": 4, 00:14:30.745 "bdev_name": "Malloc3" 00:14:30.745 } 00:14:30.745 ]' 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:30.745 23:09:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 [2024-11-18 23:09:49.540247] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.745 [2024-11-18 23:09:49.588212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.745 [2024-11-18 23:09:49.588889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.745 [2024-11-18 23:09:49.600179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.745 [2024-11-18 23:09:49.600412] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:30.745 [2024-11-18 23:09:49.600422] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 [2024-11-18 23:09:49.624245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.745 [2024-11-18 23:09:49.661607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.745 [2024-11-18 23:09:49.662565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.745 [2024-11-18 23:09:49.668178] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.745 [2024-11-18 23:09:49.668398] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:30.745 [2024-11-18 23:09:49.668409] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.745 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 [2024-11-18 23:09:49.692272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.745 [2024-11-18 23:09:49.743211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.745 [2024-11-18 23:09:49.743837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.745 [2024-11-18 23:09:49.755195] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.745 [2024-11-18 23:09:49.755423] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:30.746 [2024-11-18 23:09:49.755434] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
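Each stop above is the same two-step teardown (UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV) repeated per device id. A sketch of the whole multi-device cleanup, assuming ids 0-3 as created above:

    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk "$i"   # issues STOP_DEV followed by DEL_DEV
    done
    scripts/rpc.py ublk_destroy_target       # final shutdown, as traced below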
00:14:30.746 [2024-11-18 23:09:49.779239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:30.746 [2024-11-18 23:09:49.804607] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:30.746 [2024-11-18 23:09:49.805515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:30.746 [2024-11-18 23:09:49.811179] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:30.746 [2024-11-18 23:09:49.811400] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:30.746 [2024-11-18 23:09:49.811410] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.746 23:09:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:30.746 [2024-11-18 23:09:50.007262] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:30.746 [2024-11-18 23:09:50.008183] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:30.746 [2024-11-18 23:09:50.008218] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.746 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:31.005 00:14:31.005 real 0m2.064s 00:14:31.005 user 0m0.825s 00:14:31.005 sys 0m0.152s 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.005 23:09:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.005 ************************************ 00:14:31.005 END TEST test_create_multi_ublk 00:14:31.005 ************************************ 00:14:31.263 23:09:50 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:31.263 23:09:50 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:31.263 23:09:50 ublk -- ublk/ublk.sh@130 -- # killprocess 82963 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@950 -- # '[' -z 82963 ']' 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@954 -- # kill -0 82963 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@955 -- # uname 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82963 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:31.263 killing process with pid 82963 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82963' 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@969 -- # kill 82963 00:14:31.263 23:09:50 ublk -- common/autotest_common.sh@974 -- # wait 82963 00:14:31.263 [2024-11-18 23:09:50.587447] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.263 [2024-11-18 23:09:50.587503] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:31.521 00:14:31.521 real 0m17.773s 00:14:31.521 user 0m27.866s 00:14:31.521 sys 0m8.009s 00:14:31.521 23:09:50 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.521 23:09:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:31.521 ************************************ 00:14:31.521 END TEST ublk 00:14:31.521 ************************************ 00:14:31.779 23:09:50 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:31.779 23:09:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:14:31.779 23:09:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.779 23:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:31.779 ************************************ 00:14:31.779 START TEST ublk_recovery 00:14:31.779 ************************************ 00:14:31.779 23:09:50 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:31.779 * Looking for test storage... 00:14:31.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:31.779 23:09:50 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:31.779 23:09:50 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:31.779 23:09:50 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:31.779 23:09:51 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.779 23:09:51 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:31.779 23:09:51 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.779 23:09:51 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.779 --rc genhtml_branch_coverage=1 00:14:31.779 --rc genhtml_function_coverage=1 00:14:31.779 --rc genhtml_legend=1 00:14:31.779 --rc geninfo_all_blocks=1 00:14:31.779 --rc geninfo_unexecuted_blocks=1 00:14:31.779 00:14:31.779 ' 00:14:31.779 23:09:51 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:31.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.779 --rc genhtml_branch_coverage=1 00:14:31.779 --rc genhtml_function_coverage=1 00:14:31.779 --rc genhtml_legend=1 00:14:31.779 --rc geninfo_all_blocks=1 00:14:31.779 --rc geninfo_unexecuted_blocks=1 00:14:31.779 00:14:31.779 ' 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.780 --rc genhtml_branch_coverage=1 00:14:31.780 --rc genhtml_function_coverage=1 00:14:31.780 --rc genhtml_legend=1 00:14:31.780 --rc geninfo_all_blocks=1 00:14:31.780 --rc geninfo_unexecuted_blocks=1 00:14:31.780 00:14:31.780 ' 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:31.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.780 --rc genhtml_branch_coverage=1 00:14:31.780 --rc genhtml_function_coverage=1 00:14:31.780 --rc genhtml_legend=1 00:14:31.780 --rc geninfo_all_blocks=1 00:14:31.780 --rc geninfo_unexecuted_blocks=1 00:14:31.780 00:14:31.780 ' 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:31.780 23:09:51 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:31.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=83333 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 83333 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83333 ']' 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.780 23:09:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:31.780 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:31.780 [2024-11-18 23:09:51.130002] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:31.780 [2024-11-18 23:09:51.130091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83333 ] 00:14:32.097 [2024-11-18 23:09:51.270344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:32.097 [2024-11-18 23:09:51.299533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.097 [2024-11-18 23:09:51.299632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:32.668 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.668 [2024-11-18 23:09:51.945171] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:32.668 [2024-11-18 23:09:51.946141] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.668 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.668 malloc0 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.668 23:09:51 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.668 23:09:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:32.668 [2024-11-18 23:09:51.977271] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 
num_queues 2 queue_depth 128 00:14:32.668 [2024-11-18 23:09:51.977371] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:32.668 [2024-11-18 23:09:51.977378] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:32.668 [2024-11-18 23:09:51.977384] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:32.668 [2024-11-18 23:09:51.986243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:32.668 [2024-11-18 23:09:51.986265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:32.668 [2024-11-18 23:09:51.989629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:32.668 [2024-11-18 23:09:51.989742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:32.668 [2024-11-18 23:09:52.000213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:32.668 1 00:14:32.668 23:09:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.668 23:09:52 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:34.044 23:09:53 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=83363 00:14:34.044 23:09:53 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:34.044 23:09:53 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:34.044 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:34.044 fio-3.35 00:14:34.044 Starting 1 process 00:14:39.310 23:09:58 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 83333 00:14:39.310 23:09:58 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:44.590 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 83333 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=83478 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 83478 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 83478 ']' 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.590 [2024-11-18 23:10:03.091845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:44.590 [2024-11-18 23:10:03.091967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83478 ] 00:14:44.590 [2024-11-18 23:10:03.233006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:44.590 [2024-11-18 23:10:03.265327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.590 [2024-11-18 23:10:03.265419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.590 [2024-11-18 23:10:03.937180] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:44.590 [2024-11-18 23:10:03.938319] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.590 malloc0 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.590 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.590 23:10:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.849 [2024-11-18 23:10:03.969302] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:44.849 [2024-11-18 23:10:03.969346] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:44.849 [2024-11-18 23:10:03.969360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:44.849 [2024-11-18 23:10:03.977221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:44.849 [2024-11-18 23:10:03.977241] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:44.849 [2024-11-18 23:10:03.977254] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:44.849 [2024-11-18 23:10:03.977327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:44.849 1 00:14:44.849 23:10:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.849 23:10:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 83363 00:14:44.849 [2024-11-18 23:10:03.985197] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:44.849 [2024-11-18 23:10:03.992174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:44.849 [2024-11-18 23:10:04.000402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:44.849 [2024-11-18 
23:10:04.000423] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:41.080 00:15:41.080 fio_test: (groupid=0, jobs=1): err= 0: pid=83372: Mon Nov 18 23:10:53 2024 00:15:41.080 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(6965MiB/60002msec) 00:15:41.080 slat (nsec): min=903, max=469978, avg=4708.26, stdev=1479.91 00:15:41.080 clat (usec): min=930, max=5998.9k, avg=2123.59, stdev=36756.62 00:15:41.080 lat (usec): min=938, max=5998.9k, avg=2128.30, stdev=36756.62 00:15:41.080 clat percentiles (usec): 00:15:41.080 | 1.00th=[ 1598], 5.00th=[ 1696], 10.00th=[ 1713], 20.00th=[ 1745], 00:15:41.080 | 30.00th=[ 1762], 40.00th=[ 1778], 50.00th=[ 1778], 60.00th=[ 1795], 00:15:41.080 | 70.00th=[ 1811], 80.00th=[ 1827], 90.00th=[ 1876], 95.00th=[ 2737], 00:15:41.080 | 99.00th=[ 4686], 99.50th=[ 5080], 99.90th=[ 6390], 99.95th=[ 7046], 00:15:41.080 | 99.99th=[12780] 00:15:41.080 bw ( KiB/s): min=21416, max=136424, per=100.00%, avg=130968.83, stdev=14287.27, samples=108 00:15:41.080 iops : min= 5354, max=34106, avg=32742.20, stdev=3571.82, samples=108 00:15:41.080 write: IOPS=29.7k, BW=116MiB/s (122MB/s)(6957MiB/60002msec); 0 zone resets 00:15:41.080 slat (nsec): min=945, max=202712, avg=4737.16, stdev=1382.78 00:15:41.080 clat (usec): min=921, max=5999.0k, avg=2175.83, stdev=35092.65 00:15:41.080 lat (usec): min=931, max=5999.0k, avg=2180.57, stdev=35092.65 00:15:41.080 clat percentiles (usec): 00:15:41.080 | 1.00th=[ 1631], 5.00th=[ 1778], 10.00th=[ 1795], 20.00th=[ 1827], 00:15:41.080 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1876], 60.00th=[ 1876], 00:15:41.080 | 70.00th=[ 1893], 80.00th=[ 1926], 90.00th=[ 1958], 95.00th=[ 2638], 00:15:41.080 | 99.00th=[ 4621], 99.50th=[ 5080], 99.90th=[ 6390], 99.95th=[ 7046], 00:15:41.080 | 99.99th=[12911] 00:15:41.080 bw ( KiB/s): min=20160, max=136480, per=100.00%, avg=130838.04, stdev=14447.73, samples=108 00:15:41.080 iops : min= 5040, max=34120, avg=32709.53, stdev=3611.93, samples=108 00:15:41.080 lat (usec) : 1000=0.01% 00:15:41.080 lat (msec) : 2=92.69%, 4=5.19%, 10=2.10%, 20=0.01%, >=2000=0.01% 00:15:41.080 cpu : usr=6.19%, sys=28.73%, ctx=121110, majf=0, minf=13 00:15:41.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:41.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:41.081 issued rwts: total=1783043,1781039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.081 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:41.081 00:15:41.081 Run status group 0 (all jobs): 00:15:41.081 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=6965MiB (7303MB), run=60002-60002msec 00:15:41.081 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=6957MiB (7295MB), run=60002-60002msec 00:15:41.081 00:15:41.081 Disk stats (read/write): 00:15:41.081 ublkb1: ios=1779403/1777436, merge=0/0, ticks=3695847/3646162, in_queue=7342010, util=99.92% 00:15:41.081 23:10:53 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 [2024-11-18 23:10:53.257226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:41.081 [2024-11-18 23:10:53.295185] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
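The recovery path exercised here: the first target (pid 83333) is killed with SIGKILL while fio is mid-run, a second target re-creates the same malloc0 bdev, and ublk_recover_disk re-adopts the still-live kernel device instead of recreating it. A sketch of the second target's side, assuming disk id 1 and the default socket:

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1   # GET_DEV_INFO, then START/END_USER_RECOVERY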
00:15:41.081 [2024-11-18 23:10:53.295323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:41.081 [2024-11-18 23:10:53.303183] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:41.081 [2024-11-18 23:10:53.303282] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:41.081 [2024-11-18 23:10:53.303293] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.081 23:10:53 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 [2024-11-18 23:10:53.319229] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.081 [2024-11-18 23:10:53.320202] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.081 [2024-11-18 23:10:53.320233] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.081 23:10:53 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:41.081 23:10:53 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:41.081 23:10:53 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 83478 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 83478 ']' 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 83478 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83478 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.081 killing process with pid 83478 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83478' 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@969 -- # kill 83478 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@974 -- # wait 83478 00:15:41.081 [2024-11-18 23:10:53.513574] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:41.081 [2024-11-18 23:10:53.513625] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:41.081 00:15:41.081 real 1m2.892s 00:15:41.081 user 1m43.743s 00:15:41.081 sys 0m32.474s 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:41.081 23:10:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 ************************************ 00:15:41.081 END TEST ublk_recovery 00:15:41.081 ************************************ 00:15:41.081 23:10:53 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:41.081 23:10:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.081 23:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 23:10:53 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 
00:15:41.081 23:10:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:41.081 23:10:53 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.081 23:10:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:41.081 23:10:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.081 23:10:53 -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 ************************************ 00:15:41.081 START TEST ftl 00:15:41.081 ************************************ 00:15:41.081 23:10:53 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.081 * Looking for test storage... 00:15:41.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.081 23:10:53 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:41.081 23:10:53 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:41.081 23:10:53 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:41.081 23:10:53 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:41.081 23:10:53 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.081 23:10:53 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.081 23:10:53 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.081 23:10:53 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.081 23:10:53 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.081 23:10:53 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.081 23:10:53 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.081 23:10:53 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.081 23:10:53 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.081 23:10:53 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:41.081 23:10:53 ftl -- scripts/common.sh@345 -- # : 1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.081 23:10:53 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.081 23:10:53 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@353 -- # local d=1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.081 23:10:53 ftl -- scripts/common.sh@355 -- # echo 1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.081 23:10:53 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:41.081 23:10:53 ftl -- scripts/common.sh@353 -- # local d=2 00:15:41.081 23:10:54 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.081 23:10:54 ftl -- scripts/common.sh@355 -- # echo 2 00:15:41.081 23:10:54 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.081 23:10:54 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.081 23:10:54 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.081 23:10:54 ftl -- scripts/common.sh@368 -- # return 0 00:15:41.081 23:10:54 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.081 23:10:54 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.081 --rc genhtml_branch_coverage=1 00:15:41.081 --rc genhtml_function_coverage=1 00:15:41.081 --rc genhtml_legend=1 00:15:41.081 --rc geninfo_all_blocks=1 00:15:41.081 --rc geninfo_unexecuted_blocks=1 00:15:41.081 00:15:41.081 ' 00:15:41.081 23:10:54 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.081 --rc genhtml_branch_coverage=1 00:15:41.081 --rc genhtml_function_coverage=1 00:15:41.081 --rc genhtml_legend=1 00:15:41.081 --rc geninfo_all_blocks=1 00:15:41.081 --rc geninfo_unexecuted_blocks=1 00:15:41.081 00:15:41.081 ' 00:15:41.081 23:10:54 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.081 --rc genhtml_branch_coverage=1 00:15:41.081 --rc genhtml_function_coverage=1 00:15:41.081 --rc genhtml_legend=1 00:15:41.081 --rc geninfo_all_blocks=1 00:15:41.081 --rc geninfo_unexecuted_blocks=1 00:15:41.081 00:15:41.081 ' 00:15:41.081 23:10:54 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:41.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.081 --rc genhtml_branch_coverage=1 00:15:41.081 --rc genhtml_function_coverage=1 00:15:41.081 --rc genhtml_legend=1 00:15:41.081 --rc geninfo_all_blocks=1 00:15:41.081 --rc geninfo_unexecuted_blocks=1 00:15:41.081 00:15:41.081 ' 00:15:41.081 23:10:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:41.081 23:10:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:41.081 23:10:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.081 23:10:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.081 23:10:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
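The lt 1.15 2 walk above is scripts/common.sh comparing the detected lcov version against 2, field by field. A hedged, condensed sketch of that logic (the real cmp_versions routes each field through its decimal helper; names here follow the trace, but the defaulting of missing fields is simplified):

    cmp_versions_sketch() {        # usage: cmp_versions_sketch 1.15 '<' 2
        local IFS=.-:              # split version fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        local op=$2
        read -ra ver2 <<<"$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0 here
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]           # all fields equal: only ==, <=, >= succeed
    }

Here 1 < 2 decides the comparison at the first field, so the 1.x spelling of the coverage flags (--rc lcov_branch_coverage=1 and friends) is what gets exported into LCOV_OPTS, presumably because lcov 2.x renamed those options.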
00:15:41.081 23:10:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.081 23:10:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.081 23:10:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:41.081 23:10:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:41.081 23:10:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.081 23:10:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.081 23:10:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:41.081 23:10:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:41.081 23:10:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.082 23:10:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.082 23:10:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:41.082 23:10:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:41.082 23:10:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.082 23:10:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.082 23:10:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:41.082 23:10:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:41.082 23:10:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.082 23:10:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.082 23:10:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.082 23:10:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.082 23:10:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:41.082 23:10:54 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:41.082 23:10:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.082 23:10:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:41.082 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:41.082 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.082 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.082 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.082 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=84272 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 84272 00:15:41.082 23:10:54 ftl -- common/autotest_common.sh@831 -- # '[' -z 84272 ']' 00:15:41.082 23:10:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:41.082 23:10:54 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.082 23:10:54 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.082 23:10:54 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.082 23:10:54 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.082 23:10:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:41.082 [2024-11-18 23:10:54.503955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:41.082 [2024-11-18 23:10:54.504066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84272 ] 00:15:41.082 [2024-11-18 23:10:54.649253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.082 [2024-11-18 23:10:54.677671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.082 23:10:55 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.082 23:10:55 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:41.082 23:10:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:41.082 23:10:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:41.082 23:10:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:41.082 23:10:55 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@50 -- # break 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@63 -- # break 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@66 -- # killprocess 84272 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@950 -- # '[' -z 84272 ']' 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@954 -- # kill -0 84272 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@955 -- # uname 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:41.082 23:10:56 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84272 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:41.082 killing process with pid 84272 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84272' 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@969 -- # kill 84272 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@974 -- # wait 84272 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:41.082 23:10:56 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:41.082 23:10:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:41.082 ************************************ 00:15:41.082 START TEST ftl_fio_basic 00:15:41.082 ************************************ 00:15:41.082 23:10:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:41.082 * Looking for test storage... 00:15:41.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.082 --rc genhtml_branch_coverage=1 00:15:41.082 --rc genhtml_function_coverage=1 00:15:41.082 --rc genhtml_legend=1 00:15:41.082 --rc geninfo_all_blocks=1 00:15:41.082 --rc geninfo_unexecuted_blocks=1 00:15:41.082 00:15:41.082 ' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.082 --rc genhtml_branch_coverage=1 00:15:41.082 --rc genhtml_function_coverage=1 00:15:41.082 --rc genhtml_legend=1 00:15:41.082 --rc geninfo_all_blocks=1 00:15:41.082 --rc geninfo_unexecuted_blocks=1 00:15:41.082 00:15:41.082 ' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.082 --rc genhtml_branch_coverage=1 00:15:41.082 --rc genhtml_function_coverage=1 00:15:41.082 --rc genhtml_legend=1 00:15:41.082 --rc geninfo_all_blocks=1 00:15:41.082 --rc geninfo_unexecuted_blocks=1 00:15:41.082 00:15:41.082 ' 00:15:41.082 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:41.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.082 --rc genhtml_branch_coverage=1 00:15:41.082 --rc genhtml_function_coverage=1 00:15:41.083 --rc genhtml_legend=1 00:15:41.083 --rc geninfo_all_blocks=1 00:15:41.083 --rc geninfo_unexecuted_blocks=1 00:15:41.083 00:15:41.083 ' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=84386 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 84386 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 84386 ']' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 [2024-11-18 23:10:57.170058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
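spdk_tgt for the fio_basic suite is launched with -m 7 (cores 0-2) and polled via waitforlisten 84386 before any RPC is issued. A minimal sketch of that polling idiom, assuming the standard rpc.py rpc_get_methods call and the $rootdir exported by common.sh above (the in-tree helper in autotest_common.sh is more thorough, e.g. the retry count is configurable):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died during startup
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                # RPC server is answering
            fi
            sleep 0.5
        done
        return 1                                        # never came up
    }

The EAL parameter line that follows shows the matching launch: a -c 7 coremask, hugepage setup with --huge-unlink, and a --file-prefix keyed to the PID so concurrent targets do not collide.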
00:15:41.083 [2024-11-18 23:10:57.170192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84386 ] 00:15:41.083 [2024-11-18 23:10:57.316486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:41.083 [2024-11-18 23:10:57.348779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.083 [2024-11-18 23:10:57.349084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.083 [2024-11-18 23:10:57.349086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.083 23:10:57 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:41.083 { 00:15:41.083 "name": "nvme0n1", 00:15:41.083 "aliases": [ 00:15:41.083 "52d48d6d-98ca-4036-9d61-94d7130f36df" 00:15:41.083 ], 00:15:41.083 "product_name": "NVMe disk", 00:15:41.083 "block_size": 4096, 00:15:41.083 "num_blocks": 1310720, 00:15:41.083 "uuid": "52d48d6d-98ca-4036-9d61-94d7130f36df", 00:15:41.083 "numa_id": -1, 00:15:41.083 "assigned_rate_limits": { 00:15:41.083 "rw_ios_per_sec": 0, 00:15:41.083 "rw_mbytes_per_sec": 0, 00:15:41.083 "r_mbytes_per_sec": 0, 00:15:41.083 "w_mbytes_per_sec": 0 00:15:41.083 }, 00:15:41.083 "claimed": false, 00:15:41.083 "zoned": false, 00:15:41.083 "supported_io_types": { 00:15:41.083 "read": true, 00:15:41.083 "write": true, 00:15:41.083 "unmap": true, 00:15:41.083 "flush": true, 00:15:41.083 "reset": true, 00:15:41.083 "nvme_admin": true, 00:15:41.083 "nvme_io": true, 00:15:41.083 "nvme_io_md": false, 00:15:41.083 "write_zeroes": true, 00:15:41.083 "zcopy": false, 00:15:41.083 "get_zone_info": false, 00:15:41.083 "zone_management": false, 00:15:41.083 "zone_append": false, 00:15:41.083 "compare": true, 00:15:41.083 "compare_and_write": false, 00:15:41.083 "abort": true, 00:15:41.083 
"seek_hole": false, 00:15:41.083 "seek_data": false, 00:15:41.083 "copy": true, 00:15:41.083 "nvme_iov_md": false 00:15:41.083 }, 00:15:41.083 "driver_specific": { 00:15:41.083 "nvme": [ 00:15:41.083 { 00:15:41.083 "pci_address": "0000:00:11.0", 00:15:41.083 "trid": { 00:15:41.083 "trtype": "PCIe", 00:15:41.083 "traddr": "0000:00:11.0" 00:15:41.083 }, 00:15:41.083 "ctrlr_data": { 00:15:41.083 "cntlid": 0, 00:15:41.083 "vendor_id": "0x1b36", 00:15:41.083 "model_number": "QEMU NVMe Ctrl", 00:15:41.083 "serial_number": "12341", 00:15:41.083 "firmware_revision": "8.0.0", 00:15:41.083 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:41.083 "oacs": { 00:15:41.083 "security": 0, 00:15:41.083 "format": 1, 00:15:41.083 "firmware": 0, 00:15:41.083 "ns_manage": 1 00:15:41.083 }, 00:15:41.083 "multi_ctrlr": false, 00:15:41.083 "ana_reporting": false 00:15:41.083 }, 00:15:41.083 "vs": { 00:15:41.083 "nvme_version": "1.4" 00:15:41.083 }, 00:15:41.083 "ns_data": { 00:15:41.083 "id": 1, 00:15:41.083 "can_share": false 00:15:41.083 } 00:15:41.083 } 00:15:41.083 ], 00:15:41.083 "mp_policy": "active_passive" 00:15:41.083 } 00:15:41.083 } 00:15:41.083 ]' 00:15:41.083 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0c1964ed-542c-40c4-9352-33de92520bde 00:15:41.084 23:10:58 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0c1964ed-542c-40c4-9352-33de92520bde 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6d329210-107f-4b83-9b60-3af3ae504941 
00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:41.084 { 00:15:41.084 "name": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "aliases": [ 00:15:41.084 "lvs/nvme0n1p0" 00:15:41.084 ], 00:15:41.084 "product_name": "Logical Volume", 00:15:41.084 "block_size": 4096, 00:15:41.084 "num_blocks": 26476544, 00:15:41.084 "uuid": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "assigned_rate_limits": { 00:15:41.084 "rw_ios_per_sec": 0, 00:15:41.084 "rw_mbytes_per_sec": 0, 00:15:41.084 "r_mbytes_per_sec": 0, 00:15:41.084 "w_mbytes_per_sec": 0 00:15:41.084 }, 00:15:41.084 "claimed": false, 00:15:41.084 "zoned": false, 00:15:41.084 "supported_io_types": { 00:15:41.084 "read": true, 00:15:41.084 "write": true, 00:15:41.084 "unmap": true, 00:15:41.084 "flush": false, 00:15:41.084 "reset": true, 00:15:41.084 "nvme_admin": false, 00:15:41.084 "nvme_io": false, 00:15:41.084 "nvme_io_md": false, 00:15:41.084 "write_zeroes": true, 00:15:41.084 "zcopy": false, 00:15:41.084 "get_zone_info": false, 00:15:41.084 "zone_management": false, 00:15:41.084 "zone_append": false, 00:15:41.084 "compare": false, 00:15:41.084 "compare_and_write": false, 00:15:41.084 "abort": false, 00:15:41.084 "seek_hole": true, 00:15:41.084 "seek_data": true, 00:15:41.084 "copy": false, 00:15:41.084 "nvme_iov_md": false 00:15:41.084 }, 00:15:41.084 "driver_specific": { 00:15:41.084 "lvol": { 00:15:41.084 "lvol_store_uuid": "0c1964ed-542c-40c4-9352-33de92520bde", 00:15:41.084 "base_bdev": "nvme0n1", 00:15:41.084 "thin_provision": true, 00:15:41.084 "num_allocated_clusters": 0, 00:15:41.084 "snapshot": false, 00:15:41.084 "clone": false, 00:15:41.084 "esnap_clone": false 00:15:41.084 } 00:15:41.084 } 00:15:41.084 } 00:15:41.084 ]' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:41.084 { 00:15:41.084 "name": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "aliases": [ 00:15:41.084 "lvs/nvme0n1p0" 00:15:41.084 ], 00:15:41.084 "product_name": "Logical Volume", 00:15:41.084 "block_size": 4096, 00:15:41.084 "num_blocks": 26476544, 00:15:41.084 "uuid": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "assigned_rate_limits": { 00:15:41.084 "rw_ios_per_sec": 0, 00:15:41.084 "rw_mbytes_per_sec": 0, 00:15:41.084 "r_mbytes_per_sec": 0, 00:15:41.084 "w_mbytes_per_sec": 0 00:15:41.084 }, 00:15:41.084 "claimed": false, 00:15:41.084 "zoned": false, 00:15:41.084 "supported_io_types": { 00:15:41.084 "read": true, 00:15:41.084 "write": true, 00:15:41.084 "unmap": true, 00:15:41.084 "flush": false, 00:15:41.084 "reset": true, 00:15:41.084 "nvme_admin": false, 00:15:41.084 "nvme_io": false, 00:15:41.084 "nvme_io_md": false, 00:15:41.084 "write_zeroes": true, 00:15:41.084 "zcopy": false, 00:15:41.084 "get_zone_info": false, 00:15:41.084 "zone_management": false, 00:15:41.084 "zone_append": false, 00:15:41.084 "compare": false, 00:15:41.084 "compare_and_write": false, 00:15:41.084 "abort": false, 00:15:41.084 "seek_hole": true, 00:15:41.084 "seek_data": true, 00:15:41.084 "copy": false, 00:15:41.084 "nvme_iov_md": false 00:15:41.084 }, 00:15:41.084 "driver_specific": { 00:15:41.084 "lvol": { 00:15:41.084 "lvol_store_uuid": "0c1964ed-542c-40c4-9352-33de92520bde", 00:15:41.084 "base_bdev": "nvme0n1", 00:15:41.084 "thin_provision": true, 00:15:41.084 "num_allocated_clusters": 0, 00:15:41.084 "snapshot": false, 00:15:41.084 "clone": false, 00:15:41.084 "esnap_clone": false 00:15:41.084 } 00:15:41.084 } 00:15:41.084 } 00:15:41.084 ]' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:41.084 23:10:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:41.084 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d329210-107f-4b83-9b60-3af3ae504941 00:15:41.084 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:41.084 { 00:15:41.084 "name": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "aliases": [ 00:15:41.084 "lvs/nvme0n1p0" 00:15:41.084 ], 00:15:41.084 "product_name": "Logical Volume", 00:15:41.084 "block_size": 4096, 00:15:41.084 "num_blocks": 26476544, 00:15:41.084 "uuid": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:41.084 "assigned_rate_limits": { 00:15:41.084 "rw_ios_per_sec": 0, 00:15:41.084 "rw_mbytes_per_sec": 0, 00:15:41.084 "r_mbytes_per_sec": 0, 00:15:41.084 "w_mbytes_per_sec": 0 00:15:41.084 }, 00:15:41.085 "claimed": false, 00:15:41.085 "zoned": false, 00:15:41.085 "supported_io_types": { 00:15:41.085 "read": true, 00:15:41.085 "write": true, 00:15:41.085 "unmap": true, 00:15:41.085 "flush": false, 00:15:41.085 "reset": true, 00:15:41.085 "nvme_admin": false, 00:15:41.085 "nvme_io": false, 00:15:41.085 "nvme_io_md": false, 00:15:41.085 "write_zeroes": true, 00:15:41.085 "zcopy": false, 00:15:41.085 "get_zone_info": false, 00:15:41.085 "zone_management": false, 00:15:41.085 "zone_append": false, 00:15:41.085 "compare": false, 00:15:41.085 "compare_and_write": false, 00:15:41.085 "abort": false, 00:15:41.085 "seek_hole": true, 00:15:41.085 "seek_data": true, 00:15:41.085 "copy": false, 00:15:41.085 "nvme_iov_md": false 00:15:41.085 }, 00:15:41.085 "driver_specific": { 00:15:41.085 "lvol": { 00:15:41.085 "lvol_store_uuid": "0c1964ed-542c-40c4-9352-33de92520bde", 00:15:41.085 "base_bdev": "nvme0n1", 00:15:41.085 "thin_provision": true, 00:15:41.085 "num_allocated_clusters": 0, 00:15:41.085 "snapshot": false, 00:15:41.085 "clone": false, 00:15:41.085 "esnap_clone": false 00:15:41.085 } 00:15:41.085 } 00:15:41.085 } 00:15:41.085 ]' 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:41.085 23:11:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d329210-107f-4b83-9b60-3af3ae504941 -c nvc0n1p0 --l2p_dram_limit 60 00:15:41.362 [2024-11-18 23:11:00.541672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.362 [2024-11-18 23:11:00.541719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:41.362 [2024-11-18 23:11:00.541730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:41.362 
[2024-11-18 23:11:00.541738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.362 [2024-11-18 23:11:00.541808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.362 [2024-11-18 23:11:00.541817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:41.362 [2024-11-18 23:11:00.541835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:15:41.362 [2024-11-18 23:11:00.541846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.362 [2024-11-18 23:11:00.541881] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:41.362 [2024-11-18 23:11:00.542108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:41.362 [2024-11-18 23:11:00.542122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.362 [2024-11-18 23:11:00.542138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:41.362 [2024-11-18 23:11:00.542152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:15:41.362 [2024-11-18 23:11:00.542171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.363 [2024-11-18 23:11:00.542212] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ed1d8789-27c0-44fe-adc2-5552cf8d2357 00:15:41.363 [2024-11-18 23:11:00.543200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.363 [2024-11-18 23:11:00.543227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:41.363 [2024-11-18 23:11:00.543239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:15:41.363 [2024-11-18 23:11:00.543246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.363 [2024-11-18 23:11:00.548324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.363 [2024-11-18 23:11:00.548349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:41.363 [2024-11-18 23:11:00.548358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.979 ms 00:15:41.363 [2024-11-18 23:11:00.548364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.363 [2024-11-18 23:11:00.548462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.363 [2024-11-18 23:11:00.548475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:41.363 [2024-11-18 23:11:00.548483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:41.363 [2024-11-18 23:11:00.548489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.363 [2024-11-18 23:11:00.548566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.363 [2024-11-18 23:11:00.548584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:41.363 [2024-11-18 23:11:00.548592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:41.363 [2024-11-18 23:11:00.548598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.363 [2024-11-18 23:11:00.548627] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:41.363 [2024-11-18 23:11:00.549888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.363 [2024-11-18 
23:11:00.549914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:41.363 [2024-11-18 23:11:00.549922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:15:41.363 [2024-11-18 23:11:00.549929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.364 [2024-11-18 23:11:00.549965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.364 [2024-11-18 23:11:00.549973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:41.364 [2024-11-18 23:11:00.549979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:41.364 [2024-11-18 23:11:00.549988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.364 [2024-11-18 23:11:00.550012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:41.364 [2024-11-18 23:11:00.550167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:41.364 [2024-11-18 23:11:00.550194] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:41.364 [2024-11-18 23:11:00.550204] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:41.364 [2024-11-18 23:11:00.550213] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:41.364 [2024-11-18 23:11:00.550222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:41.364 [2024-11-18 23:11:00.550228] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:41.364 [2024-11-18 23:11:00.550238] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:41.364 [2024-11-18 23:11:00.550244] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:41.364 [2024-11-18 23:11:00.550251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:41.364 [2024-11-18 23:11:00.550256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.364 [2024-11-18 23:11:00.550263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:41.364 [2024-11-18 23:11:00.550269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:15:41.364 [2024-11-18 23:11:00.550276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.364 [2024-11-18 23:11:00.550352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.364 [2024-11-18 23:11:00.550360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:41.364 [2024-11-18 23:11:00.550366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:15:41.364 [2024-11-18 23:11:00.550381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.364 [2024-11-18 23:11:00.550478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:41.364 [2024-11-18 23:11:00.550495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:41.364 [2024-11-18 23:11:00.550510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:41.364 [2024-11-18 23:11:00.550517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.364 [2024-11-18 23:11:00.550523] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:41.364 [2024-11-18 23:11:00.550530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:41.364 [2024-11-18 23:11:00.550535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:41.364 [2024-11-18 23:11:00.550543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:41.365 [2024-11-18 23:11:00.550549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:41.365 [2024-11-18 23:11:00.550557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:41.365 [2024-11-18 23:11:00.550563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:41.365 [2024-11-18 23:11:00.550570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:41.365 [2024-11-18 23:11:00.550576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:41.365 [2024-11-18 23:11:00.550585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:41.365 [2024-11-18 23:11:00.550591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:41.365 [2024-11-18 23:11:00.550599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.365 [2024-11-18 23:11:00.550604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:41.365 [2024-11-18 23:11:00.550612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:41.365 [2024-11-18 23:11:00.550618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.365 [2024-11-18 23:11:00.550626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:41.365 [2024-11-18 23:11:00.550642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:41.365 [2024-11-18 23:11:00.550649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:41.365 [2024-11-18 23:11:00.550655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:41.366 [2024-11-18 23:11:00.550662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:41.366 [2024-11-18 23:11:00.550668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:41.366 [2024-11-18 23:11:00.550676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:41.367 [2024-11-18 23:11:00.550682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:41.367 [2024-11-18 23:11:00.550695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:41.367 [2024-11-18 23:11:00.550703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:41.367 [2024-11-18 23:11:00.550717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:41.367 [2024-11-18 23:11:00.550723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:41.367 [2024-11-18 23:11:00.550740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:41.367 [2024-11-18 23:11:00.550747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:41.367 [2024-11-18 23:11:00.550753] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:41.367 [2024-11-18 23:11:00.550760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:41.367 [2024-11-18 23:11:00.550765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:41.367 [2024-11-18 23:11:00.550773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:41.367 [2024-11-18 23:11:00.550787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:41.367 [2024-11-18 23:11:00.550793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550800] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:41.367 [2024-11-18 23:11:00.550807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:41.367 [2024-11-18 23:11:00.550825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:41.367 [2024-11-18 23:11:00.550831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:41.367 [2024-11-18 23:11:00.550840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:41.367 [2024-11-18 23:11:00.550846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:41.367 [2024-11-18 23:11:00.550853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:41.367 [2024-11-18 23:11:00.550859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:41.367 [2024-11-18 23:11:00.550866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:41.367 [2024-11-18 23:11:00.550872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:41.367 [2024-11-18 23:11:00.550882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:41.368 [2024-11-18 23:11:00.550893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:41.368 [2024-11-18 23:11:00.550902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:41.368 [2024-11-18 23:11:00.550908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:41.368 [2024-11-18 23:11:00.550915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:41.368 [2024-11-18 23:11:00.550921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:41.368 [2024-11-18 23:11:00.550928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:41.368 [2024-11-18 23:11:00.550933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:41.368 [2024-11-18 23:11:00.550941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:41.368 [2024-11-18 23:11:00.550947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:41.368 [2024-11-18 23:11:00.550953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:41.368 [2024-11-18 23:11:00.550958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:41.368 [2024-11-18 23:11:00.550965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:41.368 [2024-11-18 23:11:00.550972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:41.368 [2024-11-18 23:11:00.550979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:41.368 [2024-11-18 23:11:00.550985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:41.368 [2024-11-18 23:11:00.550992] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:41.369 [2024-11-18 23:11:00.550998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:41.369 [2024-11-18 23:11:00.551005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:41.369 [2024-11-18 23:11:00.551011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:41.369 [2024-11-18 23:11:00.551018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:41.369 [2024-11-18 23:11:00.551024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:41.369 [2024-11-18 23:11:00.551031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:41.369 [2024-11-18 23:11:00.551037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:41.369 [2024-11-18 23:11:00.551045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:15:41.369 [2024-11-18 23:11:00.551050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:41.369 [2024-11-18 23:11:00.551116] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
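The layout dump just above is internally consistent and can be sanity-checked with nothing but the numbers printed in the log (values below are copied from the dump; only the arithmetic is added):

    l2p_entries=20971520    # "L2P entries: 20971520"
    entry_bytes=4           # "L2P address size: 4"
    block_bytes=4096        # block_size from the bdev JSON dumps
    echo $(( l2p_entries * entry_bytes / 1024 / 1024 ))  # 80    -> matches "Region l2p ... 80.00 MiB"
    echo $(( l2p_entries * block_bytes / 1024 / 1024 ))  # 81920 MiB of user data addressable via 4 KiB blocks

The --l2p_dram_limit 60 passed to bdev_ftl_create caps the resident portion of that 80 MiB table, which is why ftl_l2p_cache later reports "l2p maximum resident size is: 59 (of 60) MiB"; and the scrub of the 5 NV-cache chunks announced here accounts for the roughly 2.2 s step recorded next.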
00:15:41.369 [2024-11-18 23:11:00.551129] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:43.949 [2024-11-18 23:11:02.718775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.718838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:43.949 [2024-11-18 23:11:02.718864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2167.645 ms 00:15:43.949 [2024-11-18 23:11:02.718873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.735349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.735396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:43.949 [2024-11-18 23:11:02.735412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.391 ms 00:15:43.949 [2024-11-18 23:11:02.735420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.735542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.735552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:43.949 [2024-11-18 23:11:02.735562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:15:43.949 [2024-11-18 23:11:02.735569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.744742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.744787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:43.949 [2024-11-18 23:11:02.744804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.110 ms 00:15:43.949 [2024-11-18 23:11:02.744816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.744861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.744885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:43.949 [2024-11-18 23:11:02.744898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:43.949 [2024-11-18 23:11:02.744909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.745296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.745335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:43.949 [2024-11-18 23:11:02.745350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:15:43.949 [2024-11-18 23:11:02.745360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.745535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.745556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:43.949 [2024-11-18 23:11:02.745584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:43.949 [2024-11-18 23:11:02.745596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.751552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.751588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:43.949 [2024-11-18 
23:11:02.751603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.915 ms 00:15:43.949 [2024-11-18 23:11:02.751615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.759785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:43.949 [2024-11-18 23:11:02.773393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.773429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:43.949 [2024-11-18 23:11:02.773441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.647 ms 00:15:43.949 [2024-11-18 23:11:02.773450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.809017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.809059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:43.949 [2024-11-18 23:11:02.809073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.536 ms 00:15:43.949 [2024-11-18 23:11:02.809084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.809269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.809281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:43.949 [2024-11-18 23:11:02.809293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:15:43.949 [2024-11-18 23:11:02.809302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.812011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.812057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:43.949 [2024-11-18 23:11:02.812074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.688 ms 00:15:43.949 [2024-11-18 23:11:02.812086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.814319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.814352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:43.949 [2024-11-18 23:11:02.814361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.196 ms 00:15:43.949 [2024-11-18 23:11:02.814370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.814663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.814693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:43.949 [2024-11-18 23:11:02.814702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:15:43.949 [2024-11-18 23:11:02.814712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.837891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.837942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:43.949 [2024-11-18 23:11:02.837958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.132 ms 00:15:43.949 [2024-11-18 23:11:02.837968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.841455] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.841492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:43.949 [2024-11-18 23:11:02.841502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:15:43.949 [2024-11-18 23:11:02.841511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.844237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.844270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:43.949 [2024-11-18 23:11:02.844279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.685 ms 00:15:43.949 [2024-11-18 23:11:02.844288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.847106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.847141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:43.949 [2024-11-18 23:11:02.847151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:15:43.949 [2024-11-18 23:11:02.847183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.847225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.847236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:43.949 [2024-11-18 23:11:02.847245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:43.949 [2024-11-18 23:11:02.847255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.847326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:43.949 [2024-11-18 23:11:02.847362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:43.949 [2024-11-18 23:11:02.847371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:15:43.949 [2024-11-18 23:11:02.847383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:43.949 [2024-11-18 23:11:02.848285] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2306.190 ms, result 0 00:15:43.949 { 00:15:43.949 "name": "ftl0", 00:15:43.949 "uuid": "ed1d8789-27c0-44fe-adc2-5552cf8d2357" 00:15:43.949 } 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.949 23:11:02 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:43.949 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:43.949 [ 00:15:43.949 { 00:15:43.949 "name": "ftl0", 00:15:43.949 "aliases": [ 00:15:43.949 "ed1d8789-27c0-44fe-adc2-5552cf8d2357" 00:15:43.949 ], 00:15:43.949 "product_name": "FTL disk", 00:15:43.949 
"block_size": 4096, 00:15:43.949 "num_blocks": 20971520, 00:15:43.949 "uuid": "ed1d8789-27c0-44fe-adc2-5552cf8d2357", 00:15:43.949 "assigned_rate_limits": { 00:15:43.949 "rw_ios_per_sec": 0, 00:15:43.949 "rw_mbytes_per_sec": 0, 00:15:43.949 "r_mbytes_per_sec": 0, 00:15:43.949 "w_mbytes_per_sec": 0 00:15:43.949 }, 00:15:43.949 "claimed": false, 00:15:43.949 "zoned": false, 00:15:43.949 "supported_io_types": { 00:15:43.949 "read": true, 00:15:43.949 "write": true, 00:15:43.949 "unmap": true, 00:15:43.949 "flush": true, 00:15:43.949 "reset": false, 00:15:43.949 "nvme_admin": false, 00:15:43.949 "nvme_io": false, 00:15:43.949 "nvme_io_md": false, 00:15:43.949 "write_zeroes": true, 00:15:43.949 "zcopy": false, 00:15:43.949 "get_zone_info": false, 00:15:43.949 "zone_management": false, 00:15:43.949 "zone_append": false, 00:15:43.949 "compare": false, 00:15:43.949 "compare_and_write": false, 00:15:43.949 "abort": false, 00:15:43.949 "seek_hole": false, 00:15:43.949 "seek_data": false, 00:15:43.949 "copy": false, 00:15:43.949 "nvme_iov_md": false 00:15:43.949 }, 00:15:43.949 "driver_specific": { 00:15:43.949 "ftl": { 00:15:43.949 "base_bdev": "6d329210-107f-4b83-9b60-3af3ae504941", 00:15:43.949 "cache": "nvc0n1p0" 00:15:43.949 } 00:15:43.949 } 00:15:43.949 } 00:15:43.949 ] 00:15:43.949 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:43.950 23:11:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:43.950 23:11:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:44.208 23:11:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:44.208 23:11:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:44.468 [2024-11-18 23:11:03.630264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.630309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:44.469 [2024-11-18 23:11:03.630323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:44.469 [2024-11-18 23:11:03.630340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.630382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:44.469 [2024-11-18 23:11:03.630799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.630826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:44.469 [2024-11-18 23:11:03.630834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:15:44.469 [2024-11-18 23:11:03.630846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.631250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.631269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:44.469 [2024-11-18 23:11:03.631278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:15:44.469 [2024-11-18 23:11:03.631289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.634525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.634556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:44.469 [2024-11-18 
23:11:03.634566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.211 ms 00:15:44.469 [2024-11-18 23:11:03.634576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.640770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.640799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:44.469 [2024-11-18 23:11:03.640808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:15:44.469 [2024-11-18 23:11:03.640818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.642535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.642570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:44.469 [2024-11-18 23:11:03.642579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:15:44.469 [2024-11-18 23:11:03.642588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.646884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.646942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:44.469 [2024-11-18 23:11:03.646955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.198 ms 00:15:44.469 [2024-11-18 23:11:03.646967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.647123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.647144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:44.469 [2024-11-18 23:11:03.647153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:44.469 [2024-11-18 23:11:03.647182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.648545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.648579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:44.469 [2024-11-18 23:11:03.648588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:15:44.469 [2024-11-18 23:11:03.648597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.649622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.649656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:44.469 [2024-11-18 23:11:03.649664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:15:44.469 [2024-11-18 23:11:03.649672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.650501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.650533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:44.469 [2024-11-18 23:11:03.650542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:15:44.469 [2024-11-18 23:11:03.650550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.651399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.469 [2024-11-18 23:11:03.651430] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:44.469 [2024-11-18 23:11:03.651439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:15:44.469 [2024-11-18 23:11:03.651448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.469 [2024-11-18 23:11:03.651496] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:44.469 [2024-11-18 23:11:03.651519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 
23:11:03.651727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:44.469 [2024-11-18 23:11:03.651909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:15:44.470 [2024-11-18 23:11:03.651952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.651996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:44.470 [2024-11-18 23:11:03.652446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:44.470 [2024-11-18 23:11:03.652456] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ed1d8789-27c0-44fe-adc2-5552cf8d2357 00:15:44.470 [2024-11-18 23:11:03.652467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:44.470 [2024-11-18 23:11:03.652474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:44.470 [2024-11-18 23:11:03.652484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:44.470 [2024-11-18 23:11:03.652491] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:44.470 [2024-11-18 23:11:03.652499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:44.470 [2024-11-18 23:11:03.652506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:44.470 [2024-11-18 23:11:03.652514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:44.470 [2024-11-18 23:11:03.652520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:44.470 [2024-11-18 23:11:03.652529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:44.470 [2024-11-18 23:11:03.652536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.470 [2024-11-18 23:11:03.652545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:44.470 [2024-11-18 23:11:03.652553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:15:44.470 [2024-11-18 23:11:03.652561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 [2024-11-18 23:11:03.653930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.470 [2024-11-18 23:11:03.653956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:44.470 [2024-11-18 23:11:03.653965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:15:44.470 [2024-11-18 23:11:03.653974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 [2024-11-18 23:11:03.654076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:44.470 [2024-11-18 23:11:03.654092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:44.470 [2024-11-18 23:11:03.654101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:15:44.470 [2024-11-18 23:11:03.654109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 [2024-11-18 23:11:03.659025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.470 [2024-11-18 23:11:03.659059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:44.470 [2024-11-18 23:11:03.659068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.470 [2024-11-18 23:11:03.659077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 
[2024-11-18 23:11:03.659130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.470 [2024-11-18 23:11:03.659139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:44.470 [2024-11-18 23:11:03.659147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.470 [2024-11-18 23:11:03.659165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 [2024-11-18 23:11:03.659244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.470 [2024-11-18 23:11:03.659266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:44.470 [2024-11-18 23:11:03.659274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.470 [2024-11-18 23:11:03.659283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.470 [2024-11-18 23:11:03.659304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.470 [2024-11-18 23:11:03.659314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:44.470 [2024-11-18 23:11:03.659321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.470 [2024-11-18 23:11:03.659337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.667967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.668008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:44.471 [2024-11-18 23:11:03.668019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.668028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:44.471 [2024-11-18 23:11:03.675399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:44.471 [2024-11-18 23:11:03.675489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:44.471 [2024-11-18 23:11:03.675583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:44.471 [2024-11-18 23:11:03.675691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675702] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:44.471 [2024-11-18 23:11:03.675757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:44.471 [2024-11-18 23:11:03.675831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.675899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:44.471 [2024-11-18 23:11:03.675917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:44.471 [2024-11-18 23:11:03.675924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:44.471 [2024-11-18 23:11:03.675934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:44.471 [2024-11-18 23:11:03.676093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.800 ms, result 0 00:15:44.471 true 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 84386 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 84386 ']' 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 84386 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84386 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:44.471 killing process with pid 84386 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84386' 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 84386 00:15:44.471 23:11:03 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 84386 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:49.728 23:11:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:49.728 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:49.728 fio-3.35 00:15:49.728 Starting 1 thread 00:15:53.007 00:15:53.007 test: (groupid=0, jobs=1): err= 0: pid=84546: Mon Nov 18 23:11:12 2024 00:15:53.007 read: IOPS=1267, BW=84.1MiB/s (88.2MB/s)(255MiB/3025msec) 00:15:53.007 slat (nsec): min=2881, max=21772, avg=4449.14, stdev=1889.21 00:15:53.007 clat (usec): min=249, max=1408, avg=348.20, stdev=79.95 00:15:53.007 lat (usec): min=253, max=1412, avg=352.65, stdev=80.41 00:15:53.007 clat percentiles (usec): 00:15:53.007 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 318], 00:15:53.007 | 30.00th=[ 318], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 322], 00:15:53.007 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 445], 95.00th=[ 498], 00:15:53.007 | 99.00th=[ 709], 99.50th=[ 775], 99.90th=[ 906], 99.95th=[ 1020], 00:15:53.007 | 99.99th=[ 1401] 00:15:53.007 write: IOPS=1276, BW=84.8MiB/s (88.9MB/s)(256MiB/3021msec); 0 zone resets 00:15:53.007 slat (usec): min=13, max=115, avg=24.89, stdev= 5.14 00:15:53.007 clat (usec): min=280, max=1295, avg=393.04, stdev=97.26 00:15:53.007 lat (usec): min=298, max=1335, avg=417.93, stdev=97.62 00:15:53.007 clat percentiles (usec): 00:15:53.007 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 338], 00:15:53.007 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 396], 00:15:53.007 | 70.00th=[ 404], 80.00th=[ 412], 90.00th=[ 474], 95.00th=[ 603], 00:15:53.007 | 99.00th=[ 824], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 1172], 00:15:53.007 | 99.99th=[ 1303] 00:15:53.007 bw ( KiB/s): min=85272, max=89624, per=100.00%, avg=86836.00, stdev=1684.97, samples=6 00:15:53.007 iops : min= 1254, max= 1318, avg=1277.00, stdev=24.78, samples=6 00:15:53.007 lat (usec) : 250=0.01%, 500=93.68%, 750=4.76%, 1000=1.48% 
00:15:53.007 lat (msec) : 2=0.07% 00:15:53.007 cpu : usr=99.34%, sys=0.00%, ctx=8, majf=0, minf=1181 00:15:53.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:53.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:53.007 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:53.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:53.007 00:15:53.007 Run status group 0 (all jobs): 00:15:53.007 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=255MiB (267MB), run=3025-3025msec 00:15:53.007 WRITE: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=256MiB (269MB), run=3021-3021msec 00:15:53.573 ----------------------------------------------------- 00:15:53.573 Suppressions used: 00:15:53.573 count bytes template 00:15:53.573 1 5 /usr/src/fio/parse.c 00:15:53.573 1 8 libtcmalloc_minimal.so 00:15:53.573 1 904 libcrypto.so 00:15:53.573 ----------------------------------------------------- 00:15:53.573 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:53.573 23:11:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:53.832 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:53.832 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:53.832 fio-3.35 00:15:53.832 Starting 2 threads 00:16:15.747 00:16:15.747 first_half: (groupid=0, jobs=1): err= 0: pid=84628: Mon Nov 18 23:11:34 2024 00:16:15.747 read: IOPS=3081, BW=12.0MiB/s (12.6MB/s)(255MiB/21169msec) 00:16:15.747 slat (nsec): min=2901, max=59730, avg=3553.20, stdev=705.14 00:16:15.747 clat (usec): min=647, max=266158, avg=32902.78, stdev=16053.96 00:16:15.747 lat (usec): min=651, max=266162, avg=32906.33, stdev=16053.98 00:16:15.747 clat percentiles (msec): 00:16:15.747 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 29], 00:16:15.747 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:16:15.747 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 45], 00:16:15.747 | 99.00th=[ 123], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 192], 00:16:15.747 | 99.99th=[ 259] 00:16:15.747 write: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(256MiB/16354msec); 0 zone resets 00:16:15.747 slat (usec): min=3, max=790, avg= 5.31, stdev= 3.87 00:16:15.747 clat (usec): min=361, max=73930, avg=8564.66, stdev=14625.96 00:16:15.747 lat (usec): min=372, max=73936, avg=8569.98, stdev=14625.99 00:16:15.747 clat percentiles (usec): 00:16:15.747 | 1.00th=[ 660], 5.00th=[ 848], 10.00th=[ 988], 20.00th=[ 1188], 00:16:15.747 | 30.00th=[ 2343], 40.00th=[ 3458], 50.00th=[ 4359], 60.00th=[ 5014], 00:16:15.747 | 70.00th=[ 5735], 80.00th=[ 9503], 90.00th=[13042], 95.00th=[57410], 00:16:15.747 | 99.00th=[66323], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:16:15.747 | 99.99th=[73925] 00:16:15.747 bw ( KiB/s): min= 152, max=41624, per=95.93%, avg=27592.32, stdev=13349.35, samples=19 00:16:15.747 iops : min= 38, max=10406, avg=6898.05, stdev=3337.36, samples=19 00:16:15.747 lat (usec) : 500=0.03%, 750=1.36%, 1000=4.08% 00:16:15.747 lat (msec) : 2=8.94%, 4=8.63%, 10=18.49%, 20=5.66%, 50=47.32% 00:16:15.747 lat (msec) : 100=4.56%, 250=0.92%, 500=0.01% 00:16:15.747 cpu : usr=99.27%, sys=0.13%, ctx=30, majf=0, minf=5587 00:16:15.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:15.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.747 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.747 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.747 second_half: (groupid=0, jobs=1): err= 0: pid=84629: Mon Nov 18 23:11:34 2024 00:16:15.747 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(255MiB/21327msec) 00:16:15.747 slat (nsec): min=2892, max=16213, avg=3534.06, stdev=559.07 00:16:15.747 clat (usec): min=602, max=270311, avg=32224.82, stdev=17817.49 00:16:15.747 lat (usec): min=606, max=270316, avg=32228.35, stdev=17817.53 00:16:15.747 clat percentiles (msec): 00:16:15.747 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 29], 00:16:15.747 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 30], 00:16:15.747 | 70.00th=[ 31], 80.00th=[ 33], 90.00th=[ 36], 
95.00th=[ 44], 00:16:15.747 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 205], 00:16:15.747 | 99.99th=[ 266] 00:16:15.747 write: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(256MiB/18228msec); 0 zone resets 00:16:15.747 slat (usec): min=3, max=797, avg= 5.36, stdev= 4.68 00:16:15.747 clat (usec): min=368, max=74371, avg=9551.79, stdev=15600.33 00:16:15.747 lat (usec): min=374, max=74376, avg=9557.15, stdev=15600.39 00:16:15.747 clat percentiles (usec): 00:16:15.747 | 1.00th=[ 652], 5.00th=[ 775], 10.00th=[ 914], 20.00th=[ 1123], 00:16:15.747 | 30.00th=[ 2040], 40.00th=[ 3130], 50.00th=[ 4080], 60.00th=[ 5014], 00:16:15.747 | 70.00th=[ 6128], 80.00th=[10683], 90.00th=[28443], 95.00th=[58983], 00:16:15.747 | 99.00th=[66847], 99.50th=[68682], 99.90th=[72877], 99.95th=[73925], 00:16:15.747 | 99.99th=[73925] 00:16:15.747 bw ( KiB/s): min= 944, max=53216, per=82.84%, avg=23828.77, stdev=13243.12, samples=22 00:16:15.747 iops : min= 236, max=13304, avg=5957.18, stdev=3310.79, samples=22 00:16:15.747 lat (usec) : 500=0.03%, 750=2.05%, 1000=4.86% 00:16:15.747 lat (msec) : 2=8.13%, 4=9.67%, 10=16.54%, 20=5.31%, 50=48.05% 00:16:15.747 lat (msec) : 100=4.25%, 250=1.10%, 500=0.01% 00:16:15.747 cpu : usr=99.45%, sys=0.10%, ctx=43, majf=0, minf=5553 00:16:15.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:15.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.748 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.748 issued rwts: total=65250,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.748 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.748 00:16:15.748 Run status group 0 (all jobs): 00:16:15.748 READ: bw=23.9MiB/s (25.1MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=510MiB (534MB), run=21169-21327msec 00:16:15.748 WRITE: bw=28.1MiB/s (29.5MB/s), 14.0MiB/s-15.7MiB/s (14.7MB/s-16.4MB/s), io=512MiB (537MB), run=16354-18228msec 00:16:16.689 ----------------------------------------------------- 00:16:16.689 Suppressions used: 00:16:16.689 count bytes template 00:16:16.689 2 10 /usr/src/fio/parse.c 00:16:16.689 2 192 /usr/src/fio/iolog.c 00:16:16.689 1 8 libtcmalloc_minimal.so 00:16:16.689 1 904 libcrypto.so 00:16:16.689 ----------------------------------------------------- 00:16:16.689 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:16.689 23:11:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:16.689 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:16.689 fio-3.35 00:16:16.689 Starting 1 thread 00:16:31.630 00:16:31.630 test: (groupid=0, jobs=1): err= 0: pid=84902: Mon Nov 18 23:11:48 2024 00:16:31.630 read: IOPS=8368, BW=32.7MiB/s (34.3MB/s)(255MiB/7791msec) 00:16:31.630 slat (nsec): min=2895, max=46316, avg=3399.64, stdev=967.40 00:16:31.630 clat (usec): min=465, max=39027, avg=15286.08, stdev=1867.08 00:16:31.630 lat (usec): min=469, max=39030, avg=15289.48, stdev=1867.09 00:16:31.630 clat percentiles (usec): 00:16:31.630 | 1.00th=[12649], 5.00th=[12911], 10.00th=[14484], 20.00th=[14615], 00:16:31.630 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:16:31.630 | 70.00th=[15401], 80.00th=[15533], 90.00th=[15795], 95.00th=[18482], 00:16:31.630 | 99.00th=[23725], 99.50th=[24249], 99.90th=[31327], 99.95th=[35390], 00:16:31.630 | 99.99th=[38536] 00:16:31.630 write: IOPS=15.1k, BW=58.9MiB/s (61.8MB/s)(256MiB/4347msec); 0 zone resets 00:16:31.630 slat (usec): min=4, max=541, avg= 6.33, stdev= 3.54 00:16:31.630 clat (usec): min=425, max=51648, avg=8436.88, stdev=11356.26 00:16:31.630 lat (usec): min=451, max=51654, avg=8443.21, stdev=11356.27 00:16:31.630 clat percentiles (usec): 00:16:31.630 | 1.00th=[ 668], 5.00th=[ 799], 10.00th=[ 914], 20.00th=[ 1123], 00:16:31.630 | 30.00th=[ 1336], 40.00th=[ 1942], 50.00th=[ 4817], 60.00th=[ 5800], 00:16:31.630 | 70.00th=[ 6652], 80.00th=[ 8160], 90.00th=[32637], 95.00th=[36439], 00:16:31.630 | 99.00th=[40633], 99.50th=[43254], 99.90th=[48497], 99.95th=[49021], 00:16:31.630 | 99.99th=[50594] 00:16:31.630 bw ( KiB/s): min=45560, max=94544, per=96.60%, avg=58254.22, stdev=16529.56, samples=9 00:16:31.630 iops : min=11390, max=23636, avg=14563.56, stdev=4132.39, samples=9 00:16:31.630 lat (usec) : 500=0.02%, 750=1.52%, 1000=5.50% 00:16:31.630 lat (msec) : 2=13.19%, 4=1.64%, 10=19.98%, 20=48.30%, 50=9.85% 00:16:31.630 lat (msec) : 100=0.01% 00:16:31.630 cpu : usr=99.15%, sys=0.12%, 
ctx=26, majf=0, minf=5577 00:16:31.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:31.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.630 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:31.630 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:31.630 00:16:31.630 Run status group 0 (all jobs): 00:16:31.630 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=255MiB (267MB), run=7791-7791msec 00:16:31.630 WRITE: bw=58.9MiB/s (61.8MB/s), 58.9MiB/s-58.9MiB/s (61.8MB/s-61.8MB/s), io=256MiB (268MB), run=4347-4347msec 00:16:31.630 ----------------------------------------------------- 00:16:31.630 Suppressions used: 00:16:31.630 count bytes template 00:16:31.630 1 5 /usr/src/fio/parse.c 00:16:31.630 2 192 /usr/src/fio/iolog.c 00:16:31.630 1 8 libtcmalloc_minimal.so 00:16:31.630 1 904 libcrypto.so 00:16:31.630 ----------------------------------------------------- 00:16:31.630 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:31.630 Remove shared memory files 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:31.630 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69887 /dev/shm/spdk_tgt_trace.pid83333 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:31.631 ************************************ 00:16:31.631 END TEST ftl_fio_basic 00:16:31.631 ************************************ 00:16:31.631 00:16:31.631 real 0m52.748s 00:16:31.631 user 1m58.983s 00:16:31.631 sys 0m2.504s 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.631 23:11:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:31.631 23:11:49 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:31.631 23:11:49 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:31.631 23:11:49 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.631 23:11:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:31.631 ************************************ 00:16:31.631 START TEST ftl_bdevperf 00:16:31.631 ************************************ 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:31.631 * Looking for test storage... 
00:16:31.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.631 --rc genhtml_branch_coverage=1 00:16:31.631 --rc genhtml_function_coverage=1 00:16:31.631 --rc genhtml_legend=1 00:16:31.631 --rc geninfo_all_blocks=1 00:16:31.631 --rc geninfo_unexecuted_blocks=1 00:16:31.631 00:16:31.631 ' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.631 --rc genhtml_branch_coverage=1 00:16:31.631 
--rc genhtml_function_coverage=1 00:16:31.631 --rc genhtml_legend=1 00:16:31.631 --rc geninfo_all_blocks=1 00:16:31.631 --rc geninfo_unexecuted_blocks=1 00:16:31.631 00:16:31.631 ' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.631 --rc genhtml_branch_coverage=1 00:16:31.631 --rc genhtml_function_coverage=1 00:16:31.631 --rc genhtml_legend=1 00:16:31.631 --rc geninfo_all_blocks=1 00:16:31.631 --rc geninfo_unexecuted_blocks=1 00:16:31.631 00:16:31.631 ' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:31.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.631 --rc genhtml_branch_coverage=1 00:16:31.631 --rc genhtml_function_coverage=1 00:16:31.631 --rc genhtml_legend=1 00:16:31.631 --rc geninfo_all_blocks=1 00:16:31.631 --rc geninfo_unexecuted_blocks=1 00:16:31.631 00:16:31.631 ' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=85124 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 85124 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 85124 ']' 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.631 23:11:49 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:31.631 [2024-11-18 23:11:49.979829] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
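The bdevperf harness set up above follows a two-step pattern: the app starts idle (-z, run nothing until told) and scoped to a single bdev (-T ftl0), and each workload is then driven over RPC by bdevperf.py, as the perform_tests calls further down show. A sketch of that pattern, using only the paths and flags that appear in this log:

  # Start bdevperf idle and arm the cleanup trap used by bdevperf.sh
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  # Once ftl0 exists, each workload is a single RPC round-trip:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632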
00:16:31.631 [2024-11-18 23:11:49.980094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85124 ] 00:16:31.631 [2024-11-18 23:11:50.130588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.631 [2024-11-18 23:11:50.174825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.631 23:11:50 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.631 23:11:50 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:31.632 23:11:50 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:31.911 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:32.171 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:32.171 { 00:16:32.171 "name": "nvme0n1", 00:16:32.171 "aliases": [ 00:16:32.171 "85462184-7a85-4b00-a1a0-bf832618e290" 00:16:32.171 ], 00:16:32.171 "product_name": "NVMe disk", 00:16:32.171 "block_size": 4096, 00:16:32.171 "num_blocks": 1310720, 00:16:32.171 "uuid": "85462184-7a85-4b00-a1a0-bf832618e290", 00:16:32.171 "numa_id": -1, 00:16:32.171 "assigned_rate_limits": { 00:16:32.171 "rw_ios_per_sec": 0, 00:16:32.171 "rw_mbytes_per_sec": 0, 00:16:32.171 "r_mbytes_per_sec": 0, 00:16:32.171 "w_mbytes_per_sec": 0 00:16:32.171 }, 00:16:32.172 "claimed": true, 00:16:32.172 "claim_type": "read_many_write_one", 00:16:32.172 "zoned": false, 00:16:32.172 "supported_io_types": { 00:16:32.172 "read": true, 00:16:32.172 "write": true, 00:16:32.172 "unmap": true, 00:16:32.172 "flush": true, 00:16:32.172 "reset": true, 00:16:32.172 "nvme_admin": true, 00:16:32.172 "nvme_io": true, 00:16:32.172 "nvme_io_md": false, 00:16:32.172 "write_zeroes": true, 00:16:32.172 "zcopy": false, 00:16:32.172 "get_zone_info": false, 00:16:32.172 "zone_management": false, 00:16:32.172 "zone_append": false, 00:16:32.172 "compare": true, 00:16:32.172 "compare_and_write": false, 00:16:32.172 "abort": true, 00:16:32.172 "seek_hole": false, 00:16:32.172 "seek_data": false, 00:16:32.172 "copy": true, 00:16:32.172 "nvme_iov_md": false 00:16:32.172 }, 00:16:32.172 "driver_specific": { 00:16:32.172 
"nvme": [ 00:16:32.172 { 00:16:32.172 "pci_address": "0000:00:11.0", 00:16:32.172 "trid": { 00:16:32.172 "trtype": "PCIe", 00:16:32.172 "traddr": "0000:00:11.0" 00:16:32.172 }, 00:16:32.172 "ctrlr_data": { 00:16:32.172 "cntlid": 0, 00:16:32.172 "vendor_id": "0x1b36", 00:16:32.172 "model_number": "QEMU NVMe Ctrl", 00:16:32.172 "serial_number": "12341", 00:16:32.172 "firmware_revision": "8.0.0", 00:16:32.172 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:32.172 "oacs": { 00:16:32.172 "security": 0, 00:16:32.172 "format": 1, 00:16:32.172 "firmware": 0, 00:16:32.172 "ns_manage": 1 00:16:32.172 }, 00:16:32.172 "multi_ctrlr": false, 00:16:32.172 "ana_reporting": false 00:16:32.172 }, 00:16:32.172 "vs": { 00:16:32.172 "nvme_version": "1.4" 00:16:32.172 }, 00:16:32.172 "ns_data": { 00:16:32.172 "id": 1, 00:16:32.172 "can_share": false 00:16:32.172 } 00:16:32.172 } 00:16:32.172 ], 00:16:32.172 "mp_policy": "active_passive" 00:16:32.172 } 00:16:32.172 } 00:16:32.172 ]' 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:32.172 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:32.431 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0c1964ed-542c-40c4-9352-33de92520bde 00:16:32.431 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:32.431 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c1964ed-542c-40c4-9352-33de92520bde 00:16:32.690 23:11:51 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d2968cda-b4ad-40de-933e-851121450248 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d2968cda-b4ad-40de-933e-851121450248 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:32.952 23:11:52 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:32.952 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.211 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:33.211 { 00:16:33.211 "name": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:33.211 "aliases": [ 00:16:33.211 "lvs/nvme0n1p0" 00:16:33.211 ], 00:16:33.211 "product_name": "Logical Volume", 00:16:33.211 "block_size": 4096, 00:16:33.211 "num_blocks": 26476544, 00:16:33.211 "uuid": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:33.211 "assigned_rate_limits": { 00:16:33.211 "rw_ios_per_sec": 0, 00:16:33.211 "rw_mbytes_per_sec": 0, 00:16:33.211 "r_mbytes_per_sec": 0, 00:16:33.211 "w_mbytes_per_sec": 0 00:16:33.211 }, 00:16:33.211 "claimed": false, 00:16:33.211 "zoned": false, 00:16:33.211 "supported_io_types": { 00:16:33.211 "read": true, 00:16:33.211 "write": true, 00:16:33.211 "unmap": true, 00:16:33.211 "flush": false, 00:16:33.211 "reset": true, 00:16:33.211 "nvme_admin": false, 00:16:33.211 "nvme_io": false, 00:16:33.211 "nvme_io_md": false, 00:16:33.211 "write_zeroes": true, 00:16:33.212 "zcopy": false, 00:16:33.212 "get_zone_info": false, 00:16:33.212 "zone_management": false, 00:16:33.212 "zone_append": false, 00:16:33.212 "compare": false, 00:16:33.212 "compare_and_write": false, 00:16:33.212 "abort": false, 00:16:33.212 "seek_hole": true, 00:16:33.212 "seek_data": true, 00:16:33.212 "copy": false, 00:16:33.212 "nvme_iov_md": false 00:16:33.212 }, 00:16:33.212 "driver_specific": { 00:16:33.212 "lvol": { 00:16:33.212 "lvol_store_uuid": "d2968cda-b4ad-40de-933e-851121450248", 00:16:33.212 "base_bdev": "nvme0n1", 00:16:33.212 "thin_provision": true, 00:16:33.212 "num_allocated_clusters": 0, 00:16:33.212 "snapshot": false, 00:16:33.212 "clone": false, 00:16:33.212 "esnap_clone": false 00:16:33.212 } 00:16:33.212 } 00:16:33.212 } 00:16:33.212 ]' 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:33.212 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:33.469 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.726 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:33.726 { 00:16:33.726 "name": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:33.726 "aliases": [ 00:16:33.726 "lvs/nvme0n1p0" 00:16:33.726 ], 00:16:33.726 "product_name": "Logical Volume", 00:16:33.726 "block_size": 4096, 00:16:33.726 "num_blocks": 26476544, 00:16:33.726 "uuid": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:33.726 "assigned_rate_limits": { 00:16:33.726 "rw_ios_per_sec": 0, 00:16:33.726 "rw_mbytes_per_sec": 0, 00:16:33.726 "r_mbytes_per_sec": 0, 00:16:33.726 "w_mbytes_per_sec": 0 00:16:33.726 }, 00:16:33.726 "claimed": false, 00:16:33.726 "zoned": false, 00:16:33.726 "supported_io_types": { 00:16:33.726 "read": true, 00:16:33.726 "write": true, 00:16:33.726 "unmap": true, 00:16:33.726 "flush": false, 00:16:33.726 "reset": true, 00:16:33.726 "nvme_admin": false, 00:16:33.726 "nvme_io": false, 00:16:33.726 "nvme_io_md": false, 00:16:33.726 "write_zeroes": true, 00:16:33.726 "zcopy": false, 00:16:33.726 "get_zone_info": false, 00:16:33.726 "zone_management": false, 00:16:33.726 "zone_append": false, 00:16:33.726 "compare": false, 00:16:33.726 "compare_and_write": false, 00:16:33.726 "abort": false, 00:16:33.726 "seek_hole": true, 00:16:33.726 "seek_data": true, 00:16:33.726 "copy": false, 00:16:33.726 "nvme_iov_md": false 00:16:33.726 }, 00:16:33.726 "driver_specific": { 00:16:33.726 "lvol": { 00:16:33.726 "lvol_store_uuid": "d2968cda-b4ad-40de-933e-851121450248", 00:16:33.726 "base_bdev": "nvme0n1", 00:16:33.726 "thin_provision": true, 00:16:33.726 "num_allocated_clusters": 0, 00:16:33.726 "snapshot": false, 00:16:33.726 "clone": false, 00:16:33.726 "esnap_clone": false 00:16:33.726 } 00:16:33.726 } 00:16:33.726 } 00:16:33.726 ]' 00:16:33.726 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:33.726 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:33.726 23:11:52 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:33.726 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:33.726 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:33.726 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:33.726 23:11:53 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:33.726 23:11:53 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:33.984 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79f8d331-d08e-4994-8192-d0fc9b052ab2 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:34.242 { 00:16:34.242 "name": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:34.242 "aliases": [ 00:16:34.242 "lvs/nvme0n1p0" 00:16:34.242 ], 00:16:34.242 "product_name": "Logical Volume", 00:16:34.242 "block_size": 4096, 00:16:34.242 "num_blocks": 26476544, 00:16:34.242 "uuid": "79f8d331-d08e-4994-8192-d0fc9b052ab2", 00:16:34.242 "assigned_rate_limits": { 00:16:34.242 "rw_ios_per_sec": 0, 00:16:34.242 "rw_mbytes_per_sec": 0, 00:16:34.242 "r_mbytes_per_sec": 0, 00:16:34.242 "w_mbytes_per_sec": 0 00:16:34.242 }, 00:16:34.242 "claimed": false, 00:16:34.242 "zoned": false, 00:16:34.242 "supported_io_types": { 00:16:34.242 "read": true, 00:16:34.242 "write": true, 00:16:34.242 "unmap": true, 00:16:34.242 "flush": false, 00:16:34.242 "reset": true, 00:16:34.242 "nvme_admin": false, 00:16:34.242 "nvme_io": false, 00:16:34.242 "nvme_io_md": false, 00:16:34.242 "write_zeroes": true, 00:16:34.242 "zcopy": false, 00:16:34.242 "get_zone_info": false, 00:16:34.242 "zone_management": false, 00:16:34.242 "zone_append": false, 00:16:34.242 "compare": false, 00:16:34.242 "compare_and_write": false, 00:16:34.242 "abort": false, 00:16:34.242 "seek_hole": true, 00:16:34.242 "seek_data": true, 00:16:34.242 "copy": false, 00:16:34.242 "nvme_iov_md": false 00:16:34.242 }, 00:16:34.242 "driver_specific": { 00:16:34.242 "lvol": { 00:16:34.242 "lvol_store_uuid": "d2968cda-b4ad-40de-933e-851121450248", 00:16:34.242 "base_bdev": "nvme0n1", 00:16:34.242 "thin_provision": true, 00:16:34.242 "num_allocated_clusters": 0, 00:16:34.242 "snapshot": false, 00:16:34.242 "clone": false, 00:16:34.242 "esnap_clone": false 00:16:34.242 } 00:16:34.242 } 00:16:34.242 } 00:16:34.242 ]' 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:34.242 23:11:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 79f8d331-d08e-4994-8192-d0fc9b052ab2 -c nvc0n1p0 --l2p_dram_limit 20 00:16:34.502 [2024-11-18 23:11:53.678311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.502 [2024-11-18 23:11:53.678435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:34.502 [2024-11-18 23:11:53.678455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:34.503 [2024-11-18 23:11:53.678463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.678510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.678517] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:34.503 [2024-11-18 23:11:53.678527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:34.503 [2024-11-18 23:11:53.678532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.678548] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:34.503 [2024-11-18 23:11:53.678736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:34.503 [2024-11-18 23:11:53.678749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.678755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:34.503 [2024-11-18 23:11:53.678764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:16:34.503 [2024-11-18 23:11:53.678770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.678823] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a80860f6-287f-440d-aa13-850c18db5f1a 00:16:34.503 [2024-11-18 23:11:53.679787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.679810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:34.503 [2024-11-18 23:11:53.679818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:34.503 [2024-11-18 23:11:53.679825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.684521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.684548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:34.503 [2024-11-18 23:11:53.684555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:16:34.503 [2024-11-18 23:11:53.684564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.684620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.684628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:34.503 [2024-11-18 23:11:53.684635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:16:34.503 [2024-11-18 23:11:53.684643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.684675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.684685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:34.503 [2024-11-18 23:11:53.684692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:34.503 [2024-11-18 23:11:53.684699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.684714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:34.503 [2024-11-18 23:11:53.685976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.686001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:34.503 [2024-11-18 23:11:53.686013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:16:34.503 [2024-11-18 23:11:53.686022] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.686050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.686056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:34.503 [2024-11-18 23:11:53.686065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:34.503 [2024-11-18 23:11:53.686070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.686087] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:34.503 [2024-11-18 23:11:53.686209] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:34.503 [2024-11-18 23:11:53.686223] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:34.503 [2024-11-18 23:11:53.686232] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:34.503 [2024-11-18 23:11:53.686244] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686251] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686258] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:34.503 [2024-11-18 23:11:53.686264] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:34.503 [2024-11-18 23:11:53.686272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:34.503 [2024-11-18 23:11:53.686279] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:34.503 [2024-11-18 23:11:53.686289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.686294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:34.503 [2024-11-18 23:11:53.686306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:16:34.503 [2024-11-18 23:11:53.686311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.686378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.503 [2024-11-18 23:11:53.686384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:34.503 [2024-11-18 23:11:53.686391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:34.503 [2024-11-18 23:11:53.686396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.503 [2024-11-18 23:11:53.686471] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:34.503 [2024-11-18 23:11:53.686482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:34.503 [2024-11-18 23:11:53.686489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:34.503 [2024-11-18 23:11:53.686513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:34.503 
[2024-11-18 23:11:53.686526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:34.503 [2024-11-18 23:11:53.686532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.503 [2024-11-18 23:11:53.686544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:34.503 [2024-11-18 23:11:53.686549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:34.503 [2024-11-18 23:11:53.686557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:34.503 [2024-11-18 23:11:53.686562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:34.503 [2024-11-18 23:11:53.686570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:34.503 [2024-11-18 23:11:53.686575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:34.503 [2024-11-18 23:11:53.686586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:34.503 [2024-11-18 23:11:53.686605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:34.503 [2024-11-18 23:11:53.686622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:34.503 [2024-11-18 23:11:53.686643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:34.503 [2024-11-18 23:11:53.686663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:34.503 [2024-11-18 23:11:53.686683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.503 [2024-11-18 23:11:53.686695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:34.503 [2024-11-18 23:11:53.686701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:34.503 [2024-11-18 23:11:53.686707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:34.503 [2024-11-18 23:11:53.686713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:34.503 [2024-11-18 23:11:53.686720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:34.503 [2024-11-18 23:11:53.686725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:34.503 [2024-11-18 23:11:53.686738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:34.503 [2024-11-18 23:11:53.686745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686750] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:34.503 [2024-11-18 23:11:53.686760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:34.503 [2024-11-18 23:11:53.686769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:34.503 [2024-11-18 23:11:53.686777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:34.503 [2024-11-18 23:11:53.686783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:34.503 [2024-11-18 23:11:53.686791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:34.504 [2024-11-18 23:11:53.686796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:34.504 [2024-11-18 23:11:53.686803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:34.504 [2024-11-18 23:11:53.686809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:34.504 [2024-11-18 23:11:53.686816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:34.504 [2024-11-18 23:11:53.686824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:34.504 [2024-11-18 23:11:53.686833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:34.504 [2024-11-18 23:11:53.686848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:34.504 [2024-11-18 23:11:53.686855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:34.504 [2024-11-18 23:11:53.686862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:34.504 [2024-11-18 23:11:53.686868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:34.504 [2024-11-18 23:11:53.686877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:34.504 [2024-11-18 23:11:53.686883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:34.504 [2024-11-18 23:11:53.686891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:34.504 [2024-11-18 23:11:53.686897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:34.504 [2024-11-18 23:11:53.686905] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:34.504 [2024-11-18 23:11:53.686940] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:34.504 [2024-11-18 23:11:53.686948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:34.504 [2024-11-18 23:11:53.686963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:34.504 [2024-11-18 23:11:53.686969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:34.504 [2024-11-18 23:11:53.686976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:34.504 [2024-11-18 23:11:53.686983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:34.504 [2024-11-18 23:11:53.686994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:34.504 [2024-11-18 23:11:53.687000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:16:34.504 [2024-11-18 23:11:53.687008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:34.504 [2024-11-18 23:11:53.687031] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
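Everything from the "Check configuration" step above through the scrub that follows is driven by the single bdev_ftl_create RPC traced earlier, which combines the thin-provisioned lvol (base device) with the NVMe split partition (NV cache) into ftl0. The call shape below is copied from this log; the arithmetic line is an added cross-check of the layout dump, not part of the test:

  # One RPC builds the FTL bdev; -t 240 leaves room for the slow NV cache scrub
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
      -d 79f8d331-d08e-4994-8192-d0fc9b052ab2 -c nvc0n1p0 --l2p_dram_limit 20
  # Layout dump is self-consistent: 20971520 L2P entries x 4 B per entry
  echo $(( 20971520 * 4 / 1048576 ))    # -> 80, the "Region l2p ... 80.00 MiB" size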
00:16:34.504 [2024-11-18 23:11:53.687048] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:37.797 [2024-11-18 23:11:56.785634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.785725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:37.797 [2024-11-18 23:11:56.785743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3098.589 ms 00:16:37.797 [2024-11-18 23:11:56.785756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.808682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.808771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.797 [2024-11-18 23:11:56.808794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.804 ms 00:16:37.797 [2024-11-18 23:11:56.808813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.808960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.808980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:37.797 [2024-11-18 23:11:56.809003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:37.797 [2024-11-18 23:11:56.809018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.822731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.822788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.797 [2024-11-18 23:11:56.822800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.633 ms 00:16:37.797 [2024-11-18 23:11:56.822810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.822841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.822851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.797 [2024-11-18 23:11:56.822860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:37.797 [2024-11-18 23:11:56.822871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.823540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.823567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.797 [2024-11-18 23:11:56.823583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:16:37.797 [2024-11-18 23:11:56.823597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.823723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.823737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.797 [2024-11-18 23:11:56.823747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:16:37.797 [2024-11-18 23:11:56.823758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.831805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.831859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.797 [2024-11-18 
23:11:56.831870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.028 ms 00:16:37.797 [2024-11-18 23:11:56.831881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.842144] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:37.797 [2024-11-18 23:11:56.849629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.849679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.797 [2024-11-18 23:11:56.849693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.673 ms 00:16:37.797 [2024-11-18 23:11:56.849706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.931873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.931928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:37.797 [2024-11-18 23:11:56.931946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.131 ms 00:16:37.797 [2024-11-18 23:11:56.931954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.932176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.932192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:37.797 [2024-11-18 23:11:56.932204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:16:37.797 [2024-11-18 23:11:56.932212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.938144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.938208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:37.797 [2024-11-18 23:11:56.938223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.906 ms 00:16:37.797 [2024-11-18 23:11:56.938232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.943396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.943596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:37.797 [2024-11-18 23:11:56.943623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.111 ms 00:16:37.797 [2024-11-18 23:11:56.943631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.944011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.944028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:37.797 [2024-11-18 23:11:56.944046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:16:37.797 [2024-11-18 23:11:56.944054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.988594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.988648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:37.797 [2024-11-18 23:11:56.988663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.513 ms 00:16:37.797 [2024-11-18 23:11:56.988671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:56.995803] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:56.995854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:37.797 [2024-11-18 23:11:56.995876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:16:37.797 [2024-11-18 23:11:56.995884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:57.001798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:57.001846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:37.797 [2024-11-18 23:11:57.001859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.863 ms 00:16:37.797 [2024-11-18 23:11:57.001867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:57.008206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:57.008253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:37.797 [2024-11-18 23:11:57.008270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.289 ms 00:16:37.797 [2024-11-18 23:11:57.008278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:57.008330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:57.008341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:37.797 [2024-11-18 23:11:57.008356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:37.797 [2024-11-18 23:11:57.008368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:57.008441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.797 [2024-11-18 23:11:57.008450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:37.797 [2024-11-18 23:11:57.008466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:37.797 [2024-11-18 23:11:57.008474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.797 [2024-11-18 23:11:57.009593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3330.765 ms, result 0 00:16:37.797 { 00:16:37.797 "name": "ftl0", 00:16:37.797 "uuid": "a80860f6-287f-440d-aa13-850c18db5f1a" 00:16:37.797 } 00:16:37.797 23:11:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:37.797 23:11:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:37.797 23:11:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:38.059 23:11:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:38.059 [2024-11-18 23:11:57.335681] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:38.059 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:38.059 Zero copy mechanism will not be used. 00:16:38.059 Running I/O for 4 seconds... 
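The first workload uses -o 69632 (68 KiB per I/O), which sits above bdevperf's 64 KiB zero-copy threshold; that is exactly what the notice above reports. In the summary that follows, the MiB/s column is just IOPS multiplied by the I/O size; a quick check against the reported totals (illustrative one-liner, not from the log):

  # 852.22 IOPS x 69632 B per I/O, in MiB/s (cf. the Total row below)
  echo "scale=2; 852.22 * 69632 / 1048576" | bc    # -> 56.59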
00:16:40.385 820.00 IOPS, 54.45 MiB/s [2024-11-18T23:12:00.698Z] 810.00 IOPS, 53.79 MiB/s [2024-11-18T23:12:01.631Z] 855.33 IOPS, 56.80 MiB/s [2024-11-18T23:12:01.631Z] 852.25 IOPS, 56.59 MiB/s 00:16:42.253 Latency(us) 00:16:42.253 [2024-11-18T23:12:01.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.253 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:42.253 ftl0 : 4.00 852.22 56.59 0.00 0.00 1235.96 154.39 2520.62 00:16:42.253 [2024-11-18T23:12:01.631Z] =================================================================================================================== 00:16:42.253 [2024-11-18T23:12:01.631Z] Total : 852.22 56.59 0.00 0.00 1235.96 154.39 2520.62 00:16:42.253 [2024-11-18 23:12:01.343307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:42.253 { 00:16:42.253 "results": [ 00:16:42.253 { 00:16:42.253 "job": "ftl0", 00:16:42.253 "core_mask": "0x1", 00:16:42.253 "workload": "randwrite", 00:16:42.253 "status": "finished", 00:16:42.253 "queue_depth": 1, 00:16:42.253 "io_size": 69632, 00:16:42.253 "runtime": 4.001301, 00:16:42.253 "iops": 852.2228145295743, 00:16:42.253 "mibps": 56.59292127735454, 00:16:42.253 "io_failed": 0, 00:16:42.253 "io_timeout": 0, 00:16:42.253 "avg_latency_us": 1235.9629181141438, 00:16:42.253 "min_latency_us": 154.3876923076923, 00:16:42.253 "max_latency_us": 2520.6153846153848 00:16:42.253 } 00:16:42.253 ], 00:16:42.253 "core_count": 1 00:16:42.253 } 00:16:42.253 23:12:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 [2024-11-18 23:12:01.445689] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:42.253 Running I/O for 4 seconds... 
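A quick consistency check on the result block above: mibps is just iops times the I/O size, 852.2228 IOPS x 69632 B / 1048576 ≈ 56.59 MiB/s, matching the reported field. A sketch of pulling the headline numbers out of a saved copy of that JSON (results.json is a hypothetical capture file; the harness prints the object to stdout):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json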
00:16:44.139 8163.00 IOPS, 31.89 MiB/s [2024-11-18T23:12:04.461Z] 6567.00 IOPS, 25.65 MiB/s [2024-11-18T23:12:05.851Z] 6118.67 IOPS, 23.90 MiB/s [2024-11-18T23:12:05.851Z] 5837.25 IOPS, 22.80 MiB/s 00:16:46.473 Latency(us) 00:16:46.473 [2024-11-18T23:12:05.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.473 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.473 ftl0 : 4.03 5817.86 22.73 0.00 0.00 21912.09 255.21 60494.77 00:16:46.473 [2024-11-18T23:12:05.851Z] =================================================================================================================== 00:16:46.473 [2024-11-18T23:12:05.851Z] Total : 5817.86 22.73 0.00 0.00 21912.09 0.00 60494.77 00:16:46.473 [2024-11-18 23:12:05.486377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:46.473 { 00:16:46.473 "results": [ 00:16:46.473 { 00:16:46.473 "job": "ftl0", 00:16:46.473 "core_mask": "0x1", 00:16:46.473 "workload": "randwrite", 00:16:46.473 "status": "finished", 00:16:46.473 "queue_depth": 128, 00:16:46.473 "io_size": 4096, 00:16:46.473 "runtime": 4.034126, 00:16:46.473 "iops": 5817.864885727417, 00:16:46.473 "mibps": 22.72603470987272, 00:16:46.473 "io_failed": 0, 00:16:46.473 "io_timeout": 0, 00:16:46.473 "avg_latency_us": 21912.085798302254, 00:16:46.473 "min_latency_us": 255.2123076923077, 00:16:46.473 "max_latency_us": 60494.769230769234 00:16:46.473 } 00:16:46.473 ], 00:16:46.473 "core_count": 1 00:16:46.473 } 00:16:46.473 23:12:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:46.473 [2024-11-18 23:12:05.601555] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:46.473 Running I/O for 4 seconds... 
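Little's law ties the queue-depth-128 numbers above together: mean latency ≈ queue depth / IOPS, so 128 / 5817.86 ≈ 22.0 ms against the reported avg_latency_us of 21912.09. The same sanity check in shell, with both figures copied from the JSON above:

    echo 'scale=4; 128 / 5817.86 * 1000' | bc    # prints 22.0000 (ms), vs the reported 21.91 ms average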
00:16:48.432 4661.00 IOPS, 18.21 MiB/s [2024-11-18T23:12:08.796Z] 4463.50 IOPS, 17.44 MiB/s [2024-11-18T23:12:09.732Z] 4529.00 IOPS, 17.69 MiB/s [2024-11-18T23:12:09.732Z] 4653.00 IOPS, 18.18 MiB/s 00:16:50.354 Latency(us) 00:16:50.354 [2024-11-18T23:12:09.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.354 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:50.354 Verification LBA range: start 0x0 length 0x1400000 00:16:50.354 ftl0 : 4.01 4673.08 18.25 0.00 0.00 27322.14 343.43 83079.48 00:16:50.354 [2024-11-18T23:12:09.732Z] =================================================================================================================== 00:16:50.354 [2024-11-18T23:12:09.732Z] Total : 4673.08 18.25 0.00 0.00 27322.14 0.00 83079.48 00:16:50.354 [2024-11-18 23:12:09.619714] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:50.354 { 00:16:50.354 "results": [ 00:16:50.354 { 00:16:50.354 "job": "ftl0", 00:16:50.354 "core_mask": "0x1", 00:16:50.354 "workload": "verify", 00:16:50.354 "status": "finished", 00:16:50.354 "verify_range": { 00:16:50.354 "start": 0, 00:16:50.354 "length": 20971520 00:16:50.354 }, 00:16:50.354 "queue_depth": 128, 00:16:50.354 "io_size": 4096, 00:16:50.354 "runtime": 4.010204, 00:16:50.354 "iops": 4673.078975533414, 00:16:50.354 "mibps": 18.2542147481774, 00:16:50.354 "io_failed": 0, 00:16:50.354 "io_timeout": 0, 00:16:50.354 "avg_latency_us": 27322.13766094738, 00:16:50.354 "min_latency_us": 343.43384615384616, 00:16:50.354 "max_latency_us": 83079.48307692308 00:16:50.354 } 00:16:50.354 ], 00:16:50.354 "core_count": 1 00:16:50.354 } 00:16:50.354 23:12:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:50.612 [2024-11-18 23:12:09.825483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.612 [2024-11-18 23:12:09.825530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:50.612 [2024-11-18 23:12:09.825544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:50.612 [2024-11-18 23:12:09.825552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.612 [2024-11-18 23:12:09.825574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:50.612 [2024-11-18 23:12:09.825962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.612 [2024-11-18 23:12:09.825979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:50.612 [2024-11-18 23:12:09.825992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:16:50.612 [2024-11-18 23:12:09.826004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.612 [2024-11-18 23:12:09.828919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.612 [2024-11-18 23:12:09.828952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:50.612 [2024-11-18 23:12:09.828962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.897 ms 00:16:50.612 [2024-11-18 23:12:09.828974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.018134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.018190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:16:50.875 [2024-11-18 23:12:10.018202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 189.140 ms 00:16:50.875 [2024-11-18 23:12:10.018212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.024343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.024376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:50.875 [2024-11-18 23:12:10.024391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:16:50.875 [2024-11-18 23:12:10.024400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.026537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.026570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:50.875 [2024-11-18 23:12:10.026578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:16:50.875 [2024-11-18 23:12:10.026591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.031619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.031655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:50.875 [2024-11-18 23:12:10.031664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:16:50.875 [2024-11-18 23:12:10.031678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.031783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.031795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:50.875 [2024-11-18 23:12:10.031804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:50.875 [2024-11-18 23:12:10.031814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.034549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.034581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:50.875 [2024-11-18 23:12:10.034590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.717 ms 00:16:50.875 [2024-11-18 23:12:10.034598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.037002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.037035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:50.875 [2024-11-18 23:12:10.037044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:16:50.875 [2024-11-18 23:12:10.037054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.038937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.038970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:50.875 [2024-11-18 23:12:10.038978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.853 ms 00:16:50.875 [2024-11-18 23:12:10.039003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.040873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.875 [2024-11-18 23:12:10.041003] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:50.875 [2024-11-18 23:12:10.041017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:16:50.875 [2024-11-18 23:12:10.041025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.875 [2024-11-18 23:12:10.041052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:50.875 [2024-11-18 23:12:10.041068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:50.875 [2024-11-18 23:12:10.041077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:16:50.876 [2024-11-18 23:12:10.041274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:50.876 [2024-11-18 23:12:10.041835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041896] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:50.877 [2024-11-18 23:12:10.041936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:50.877 [2024-11-18 23:12:10.041944] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a80860f6-287f-440d-aa13-850c18db5f1a 00:16:50.877 [2024-11-18 23:12:10.041952] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:50.877 [2024-11-18 23:12:10.041961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:50.877 [2024-11-18 23:12:10.041970] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:50.877 [2024-11-18 23:12:10.041978] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:50.877 [2024-11-18 23:12:10.041987] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:50.877 [2024-11-18 23:12:10.041995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:50.877 [2024-11-18 23:12:10.042003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:50.877 [2024-11-18 23:12:10.042009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:50.877 [2024-11-18 23:12:10.042017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:50.877 [2024-11-18 23:12:10.042024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.877 [2024-11-18 23:12:10.042032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:50.877 [2024-11-18 23:12:10.042040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:16:50.877 [2024-11-18 23:12:10.042051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.043446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.877 [2024-11-18 23:12:10.043468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:50.877 [2024-11-18 23:12:10.043477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:16:50.877 [2024-11-18 23:12:10.043486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.043559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:50.877 [2024-11-18 23:12:10.043569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:50.877 [2024-11-18 23:12:10.043582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:50.877 [2024-11-18 23:12:10.043593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.048056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.048169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:50.877 [2024-11-18 23:12:10.048220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.048246] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.048308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.048336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:50.877 [2024-11-18 23:12:10.048356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.048375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.048448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.048476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:50.877 [2024-11-18 23:12:10.048498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.048560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.048592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.048615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:50.877 [2024-11-18 23:12:10.048635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.048658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.057369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.057514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:50.877 [2024-11-18 23:12:10.057565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.057590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:50.877 [2024-11-18 23:12:10.065283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:50.877 [2024-11-18 23:12:10.065415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:50.877 [2024-11-18 23:12:10.065569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:50.877 [2024-11-18 23:12:10.065668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:16:50.877 [2024-11-18 23:12:10.065676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:50.877 [2024-11-18 23:12:10.065726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:50.877 [2024-11-18 23:12:10.065785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:50.877 [2024-11-18 23:12:10.065842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:50.877 [2024-11-18 23:12:10.065850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:50.877 [2024-11-18 23:12:10.065860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:50.877 [2024-11-18 23:12:10.065971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 240.456 ms, result 0 00:16:50.877 true 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 85124 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 85124 ']' 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 85124 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85124 00:16:50.877 killing process with pid 85124 00:16:50.877 Received shutdown signal, test time was about 4.000000 seconds 00:16:50.877 00:16:50.877 Latency(us) 00:16:50.877 [2024-11-18T23:12:10.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.877 [2024-11-18T23:12:10.255Z] =================================================================================================================== 00:16:50.877 [2024-11-18T23:12:10.255Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85124' 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 85124 00:16:50.877 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 85124 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:51.139 Remove shared memory files 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:51.139 23:12:10 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:51.139 23:12:10 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:51.139 ************************************ 00:16:51.139 END TEST ftl_bdevperf 00:16:51.140 ************************************ 00:16:51.140 00:16:51.140 real 0m20.601s 00:16:51.140 user 0m23.181s 00:16:51.140 sys 0m0.926s 00:16:51.140 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.140 23:12:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:51.140 23:12:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:51.140 23:12:10 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:51.140 23:12:10 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.140 23:12:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:51.140 ************************************ 00:16:51.140 START TEST ftl_trim 00:16:51.140 ************************************ 00:16:51.140 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:51.140 * Looking for test storage... 00:16:51.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.140 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:51.140 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:51.140 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.401 23:12:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.401 --rc genhtml_branch_coverage=1 00:16:51.401 --rc genhtml_function_coverage=1 00:16:51.401 --rc genhtml_legend=1 00:16:51.401 --rc geninfo_all_blocks=1 00:16:51.401 --rc geninfo_unexecuted_blocks=1 00:16:51.401 00:16:51.401 ' 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.401 --rc genhtml_branch_coverage=1 00:16:51.401 --rc genhtml_function_coverage=1 00:16:51.401 --rc genhtml_legend=1 00:16:51.401 --rc geninfo_all_blocks=1 00:16:51.401 --rc geninfo_unexecuted_blocks=1 00:16:51.401 00:16:51.401 ' 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.401 --rc genhtml_branch_coverage=1 00:16:51.401 --rc genhtml_function_coverage=1 00:16:51.401 --rc genhtml_legend=1 00:16:51.401 --rc geninfo_all_blocks=1 00:16:51.401 --rc geninfo_unexecuted_blocks=1 00:16:51.401 00:16:51.401 ' 00:16:51.401 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.401 --rc genhtml_branch_coverage=1 00:16:51.401 --rc genhtml_function_coverage=1 00:16:51.401 --rc genhtml_legend=1 00:16:51.401 --rc geninfo_all_blocks=1 00:16:51.401 --rc geninfo_unexecuted_blocks=1 00:16:51.401 00:16:51.401 ' 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
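The common.sh trace above is the shell version comparison deciding that the installed lcov (1.15) predates major version 2, which is why the older lcov_branch_coverage/lcov_function_coverage --rc spellings end up in LCOV_OPTS. A condensed bash sketch of that field-by-field compare; cmp_lt is a hypothetical name standing in for the '<' case of scripts/common.sh's cmp_versions:

    cmp_lt() {
        local IFS=.-:                  # split version fields on '.', '-' and ':'
        local -a a=($1) b=($2)
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                       # equal versions are not less-than
    }
    cmp_lt 1.15 2 && echo old-lcov     # prints old-lcov, matching the 'lt 1.15 2' above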
00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:51.401 23:12:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:51.402 23:12:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=85464 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 85464 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85464 ']' 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.402 23:12:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.402 23:12:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:51.402 [2024-11-18 23:12:10.667259] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:51.402 [2024-11-18 23:12:10.667568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85464 ] 00:16:51.661 [2024-11-18 23:12:10.822925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.661 [2024-11-18 23:12:10.875465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.661 [2024-11-18 23:12:10.875532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.661 [2024-11-18 23:12:10.875565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.227 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.227 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:52.227 23:12:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:52.227 23:12:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:52.227 23:12:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:52.227 23:12:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:52.227 23:12:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:52.228 23:12:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:52.486 23:12:11 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:52.486 23:12:11 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:52.486 23:12:11 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:52.486 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:52.486 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:52.486 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:52.486 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:52.486 23:12:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:52.744 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:52.744 { 00:16:52.744 "name": "nvme0n1", 00:16:52.744 "aliases": [ 
00:16:52.744 "15606ede-7a34-46a9-be6e-db3879f5085f" 00:16:52.744 ], 00:16:52.744 "product_name": "NVMe disk", 00:16:52.744 "block_size": 4096, 00:16:52.744 "num_blocks": 1310720, 00:16:52.744 "uuid": "15606ede-7a34-46a9-be6e-db3879f5085f", 00:16:52.744 "numa_id": -1, 00:16:52.744 "assigned_rate_limits": { 00:16:52.744 "rw_ios_per_sec": 0, 00:16:52.744 "rw_mbytes_per_sec": 0, 00:16:52.744 "r_mbytes_per_sec": 0, 00:16:52.744 "w_mbytes_per_sec": 0 00:16:52.744 }, 00:16:52.744 "claimed": true, 00:16:52.744 "claim_type": "read_many_write_one", 00:16:52.744 "zoned": false, 00:16:52.744 "supported_io_types": { 00:16:52.744 "read": true, 00:16:52.744 "write": true, 00:16:52.744 "unmap": true, 00:16:52.744 "flush": true, 00:16:52.744 "reset": true, 00:16:52.744 "nvme_admin": true, 00:16:52.744 "nvme_io": true, 00:16:52.744 "nvme_io_md": false, 00:16:52.744 "write_zeroes": true, 00:16:52.744 "zcopy": false, 00:16:52.744 "get_zone_info": false, 00:16:52.744 "zone_management": false, 00:16:52.744 "zone_append": false, 00:16:52.744 "compare": true, 00:16:52.744 "compare_and_write": false, 00:16:52.744 "abort": true, 00:16:52.744 "seek_hole": false, 00:16:52.744 "seek_data": false, 00:16:52.744 "copy": true, 00:16:52.744 "nvme_iov_md": false 00:16:52.744 }, 00:16:52.744 "driver_specific": { 00:16:52.744 "nvme": [ 00:16:52.744 { 00:16:52.744 "pci_address": "0000:00:11.0", 00:16:52.744 "trid": { 00:16:52.744 "trtype": "PCIe", 00:16:52.744 "traddr": "0000:00:11.0" 00:16:52.744 }, 00:16:52.744 "ctrlr_data": { 00:16:52.744 "cntlid": 0, 00:16:52.744 "vendor_id": "0x1b36", 00:16:52.744 "model_number": "QEMU NVMe Ctrl", 00:16:52.744 "serial_number": "12341", 00:16:52.744 "firmware_revision": "8.0.0", 00:16:52.744 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:52.744 "oacs": { 00:16:52.744 "security": 0, 00:16:52.744 "format": 1, 00:16:52.744 "firmware": 0, 00:16:52.744 "ns_manage": 1 00:16:52.744 }, 00:16:52.744 "multi_ctrlr": false, 00:16:52.744 "ana_reporting": false 00:16:52.744 }, 00:16:52.744 "vs": { 00:16:52.744 "nvme_version": "1.4" 00:16:52.744 }, 00:16:52.744 "ns_data": { 00:16:52.744 "id": 1, 00:16:52.744 "can_share": false 00:16:52.744 } 00:16:52.745 } 00:16:52.745 ], 00:16:52.745 "mp_policy": "active_passive" 00:16:52.745 } 00:16:52.745 } 00:16:52.745 ]' 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:52.745 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:52.745 23:12:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:52.745 23:12:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:52.745 23:12:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:52.745 23:12:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:52.745 23:12:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:53.003 23:12:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d2968cda-b4ad-40de-933e-851121450248 00:16:53.003 23:12:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:53.003 23:12:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u d2968cda-b4ad-40de-933e-851121450248 00:16:53.262 23:12:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=051ca3a0-d80b-4595-9d69-ca3ffb60917e 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 051ca3a0-d80b-4595-9d69-ca3ffb60917e 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:53.520 23:12:12 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.520 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.520 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:53.520 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:53.520 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:53.520 23:12:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:53.778 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:53.778 { 00:16:53.778 "name": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:53.778 "aliases": [ 00:16:53.778 "lvs/nvme0n1p0" 00:16:53.778 ], 00:16:53.778 "product_name": "Logical Volume", 00:16:53.778 "block_size": 4096, 00:16:53.778 "num_blocks": 26476544, 00:16:53.778 "uuid": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:53.778 "assigned_rate_limits": { 00:16:53.778 "rw_ios_per_sec": 0, 00:16:53.778 "rw_mbytes_per_sec": 0, 00:16:53.778 "r_mbytes_per_sec": 0, 00:16:53.778 "w_mbytes_per_sec": 0 00:16:53.778 }, 00:16:53.778 "claimed": false, 00:16:53.778 "zoned": false, 00:16:53.778 "supported_io_types": { 00:16:53.778 "read": true, 00:16:53.778 "write": true, 00:16:53.778 "unmap": true, 00:16:53.778 "flush": false, 00:16:53.778 "reset": true, 00:16:53.778 "nvme_admin": false, 00:16:53.778 "nvme_io": false, 00:16:53.778 "nvme_io_md": false, 00:16:53.778 "write_zeroes": true, 00:16:53.778 "zcopy": false, 00:16:53.778 "get_zone_info": false, 00:16:53.778 "zone_management": false, 00:16:53.778 "zone_append": false, 00:16:53.778 "compare": false, 00:16:53.778 "compare_and_write": false, 00:16:53.778 "abort": false, 00:16:53.778 "seek_hole": true, 00:16:53.778 "seek_data": true, 00:16:53.779 "copy": false, 00:16:53.779 "nvme_iov_md": false 00:16:53.779 }, 00:16:53.779 "driver_specific": { 00:16:53.779 "lvol": { 00:16:53.779 "lvol_store_uuid": "051ca3a0-d80b-4595-9d69-ca3ffb60917e", 00:16:53.779 "base_bdev": "nvme0n1", 00:16:53.779 "thin_provision": true, 00:16:53.779 "num_allocated_clusters": 0, 00:16:53.779 "snapshot": false, 00:16:53.779 "clone": false, 00:16:53.779 "esnap_clone": false 00:16:53.779 } 00:16:53.779 } 00:16:53.779 } 00:16:53.779 ]' 00:16:53.779 23:12:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:53.779 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:53.779 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:53.779 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:53.779 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:53.779 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:53.779 23:12:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:16:53.779 23:12:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:16:53.779 23:12:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:54.038 23:12:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:54.038 23:12:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:54.038 23:12:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.038 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.038 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.038 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:54.038 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:54.295 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.295 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.295 { 00:16:54.295 "name": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:54.295 "aliases": [ 00:16:54.295 "lvs/nvme0n1p0" 00:16:54.295 ], 00:16:54.295 "product_name": "Logical Volume", 00:16:54.295 "block_size": 4096, 00:16:54.295 "num_blocks": 26476544, 00:16:54.295 "uuid": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:54.295 "assigned_rate_limits": { 00:16:54.295 "rw_ios_per_sec": 0, 00:16:54.295 "rw_mbytes_per_sec": 0, 00:16:54.295 "r_mbytes_per_sec": 0, 00:16:54.295 "w_mbytes_per_sec": 0 00:16:54.295 }, 00:16:54.295 "claimed": false, 00:16:54.295 "zoned": false, 00:16:54.295 "supported_io_types": { 00:16:54.295 "read": true, 00:16:54.295 "write": true, 00:16:54.295 "unmap": true, 00:16:54.295 "flush": false, 00:16:54.295 "reset": true, 00:16:54.295 "nvme_admin": false, 00:16:54.295 "nvme_io": false, 00:16:54.295 "nvme_io_md": false, 00:16:54.295 "write_zeroes": true, 00:16:54.295 "zcopy": false, 00:16:54.295 "get_zone_info": false, 00:16:54.295 "zone_management": false, 00:16:54.295 "zone_append": false, 00:16:54.295 "compare": false, 00:16:54.295 "compare_and_write": false, 00:16:54.295 "abort": false, 00:16:54.295 "seek_hole": true, 00:16:54.295 "seek_data": true, 00:16:54.295 "copy": false, 00:16:54.295 "nvme_iov_md": false 00:16:54.295 }, 00:16:54.295 "driver_specific": { 00:16:54.295 "lvol": { 00:16:54.295 "lvol_store_uuid": "051ca3a0-d80b-4595-9d69-ca3ffb60917e", 00:16:54.295 "base_bdev": "nvme0n1", 00:16:54.295 "thin_provision": true, 00:16:54.295 "num_allocated_clusters": 0, 00:16:54.295 "snapshot": false, 00:16:54.295 "clone": false, 00:16:54.295 "esnap_clone": false 00:16:54.295 } 00:16:54.295 } 00:16:54.295 } 00:16:54.295 ]' 00:16:54.295 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.295 23:12:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:16:54.295 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:54.554 23:12:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:16:54.554 23:12:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:54.554 23:12:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:54.554 23:12:13 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:54.554 23:12:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:54.554 23:12:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93abb09d-e8f5-4329-b59d-62894a7ddfad 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:54.813 { 00:16:54.813 "name": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:54.813 "aliases": [ 00:16:54.813 "lvs/nvme0n1p0" 00:16:54.813 ], 00:16:54.813 "product_name": "Logical Volume", 00:16:54.813 "block_size": 4096, 00:16:54.813 "num_blocks": 26476544, 00:16:54.813 "uuid": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:54.813 "assigned_rate_limits": { 00:16:54.813 "rw_ios_per_sec": 0, 00:16:54.813 "rw_mbytes_per_sec": 0, 00:16:54.813 "r_mbytes_per_sec": 0, 00:16:54.813 "w_mbytes_per_sec": 0 00:16:54.813 }, 00:16:54.813 "claimed": false, 00:16:54.813 "zoned": false, 00:16:54.813 "supported_io_types": { 00:16:54.813 "read": true, 00:16:54.813 "write": true, 00:16:54.813 "unmap": true, 00:16:54.813 "flush": false, 00:16:54.813 "reset": true, 00:16:54.813 "nvme_admin": false, 00:16:54.813 "nvme_io": false, 00:16:54.813 "nvme_io_md": false, 00:16:54.813 "write_zeroes": true, 00:16:54.813 "zcopy": false, 00:16:54.813 "get_zone_info": false, 00:16:54.813 "zone_management": false, 00:16:54.813 "zone_append": false, 00:16:54.813 "compare": false, 00:16:54.813 "compare_and_write": false, 00:16:54.813 "abort": false, 00:16:54.813 "seek_hole": true, 00:16:54.813 "seek_data": true, 00:16:54.813 "copy": false, 00:16:54.813 "nvme_iov_md": false 00:16:54.813 }, 00:16:54.813 "driver_specific": { 00:16:54.813 "lvol": { 00:16:54.813 "lvol_store_uuid": "051ca3a0-d80b-4595-9d69-ca3ffb60917e", 00:16:54.813 "base_bdev": "nvme0n1", 00:16:54.813 "thin_provision": true, 00:16:54.813 "num_allocated_clusters": 0, 00:16:54.813 "snapshot": false, 00:16:54.813 "clone": false, 00:16:54.813 "esnap_clone": false 00:16:54.813 } 00:16:54.813 } 00:16:54.813 } 00:16:54.813 ]' 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:54.813 23:12:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:16:54.813 23:12:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:54.813 23:12:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 93abb09d-e8f5-4329-b59d-62894a7ddfad -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:55.073 [2024-11-18 23:12:14.330368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.330792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:55.073 [2024-11-18 23:12:14.330814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:55.073 [2024-11-18 23:12:14.330823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.332833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.332863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:55.073 [2024-11-18 23:12:14.332871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.984 ms 00:16:55.073 [2024-11-18 23:12:14.332880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.332971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:55.073 [2024-11-18 23:12:14.333184] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:55.073 [2024-11-18 23:12:14.333203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.333211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:55.073 [2024-11-18 23:12:14.333218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:55.073 [2024-11-18 23:12:14.333225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.333305] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ec74e3b2-a15b-4a81-b737-658f06791b00 00:16:55.073 [2024-11-18 23:12:14.334548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.334580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:55.073 [2024-11-18 23:12:14.334590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:16:55.073 [2024-11-18 23:12:14.334596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.341334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.341434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:55.073 [2024-11-18 23:12:14.341448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.666 ms 00:16:55.073 [2024-11-18 23:12:14.341455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.341551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.341561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:55.073 [2024-11-18 23:12:14.341569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.050 ms 00:16:55.073 [2024-11-18 23:12:14.341593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.341634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.341642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:55.073 [2024-11-18 23:12:14.341650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:55.073 [2024-11-18 23:12:14.341656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.341687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:55.073 [2024-11-18 23:12:14.343295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.343337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:55.073 [2024-11-18 23:12:14.343344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:16:55.073 [2024-11-18 23:12:14.343352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.343398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.343409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:55.073 [2024-11-18 23:12:14.343416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:55.073 [2024-11-18 23:12:14.343425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.343453] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:55.073 [2024-11-18 23:12:14.343570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:55.073 [2024-11-18 23:12:14.343584] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:55.073 [2024-11-18 23:12:14.343595] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:55.073 [2024-11-18 23:12:14.343603] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:55.073 [2024-11-18 23:12:14.343611] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:55.073 [2024-11-18 23:12:14.343617] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:55.073 [2024-11-18 23:12:14.343626] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:55.073 [2024-11-18 23:12:14.343631] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:55.073 [2024-11-18 23:12:14.343646] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:55.073 [2024-11-18 23:12:14.343652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 [2024-11-18 23:12:14.343659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:55.073 [2024-11-18 23:12:14.343665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:55.073 [2024-11-18 23:12:14.343674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.343748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.073 
[2024-11-18 23:12:14.343760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:55.073 [2024-11-18 23:12:14.343766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:55.073 [2024-11-18 23:12:14.343772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.073 [2024-11-18 23:12:14.343881] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:55.073 [2024-11-18 23:12:14.343890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:55.073 [2024-11-18 23:12:14.343897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.073 [2024-11-18 23:12:14.343904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.073 [2024-11-18 23:12:14.343911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:55.073 [2024-11-18 23:12:14.343918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:55.073 [2024-11-18 23:12:14.343923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:55.073 [2024-11-18 23:12:14.343930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:55.073 [2024-11-18 23:12:14.343935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:55.073 [2024-11-18 23:12:14.343941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.073 [2024-11-18 23:12:14.343948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:55.073 [2024-11-18 23:12:14.343956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:55.073 [2024-11-18 23:12:14.343962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:55.073 [2024-11-18 23:12:14.343980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:55.073 [2024-11-18 23:12:14.343986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:16:55.073 [2024-11-18 23:12:14.343993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.073 [2024-11-18 23:12:14.343999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:55.073 [2024-11-18 23:12:14.344006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:16:55.073 [2024-11-18 23:12:14.344012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.073 [2024-11-18 23:12:14.344019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:55.073 [2024-11-18 23:12:14.344025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:55.073 [2024-11-18 23:12:14.344032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:55.074 [2024-11-18 23:12:14.344045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:55.074 [2024-11-18 23:12:14.344064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:16:55.074 [2024-11-18 23:12:14.344085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:55.074 [2024-11-18 23:12:14.344104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.074 [2024-11-18 23:12:14.344119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:55.074 [2024-11-18 23:12:14.344127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:16:55.074 [2024-11-18 23:12:14.344134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:55.074 [2024-11-18 23:12:14.344142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:55.074 [2024-11-18 23:12:14.344148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:16:55.074 [2024-11-18 23:12:14.344168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:55.074 [2024-11-18 23:12:14.344182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:16:55.074 [2024-11-18 23:12:14.344188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344195] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:55.074 [2024-11-18 23:12:14.344210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:55.074 [2024-11-18 23:12:14.344221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:55.074 [2024-11-18 23:12:14.344241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:55.074 [2024-11-18 23:12:14.344248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:55.074 [2024-11-18 23:12:14.344254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:55.074 [2024-11-18 23:12:14.344260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:55.074 [2024-11-18 23:12:14.344267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:55.074 [2024-11-18 23:12:14.344274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:55.074 [2024-11-18 23:12:14.344284] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:55.074 [2024-11-18 23:12:14.344292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:55.074 [2024-11-18 23:12:14.344307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:16:55.074 [2024-11-18 23:12:14.344314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:16:55.074 [2024-11-18 23:12:14.344321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:16:55.074 [2024-11-18 23:12:14.344328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:16:55.074 [2024-11-18 23:12:14.344334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:16:55.074 [2024-11-18 23:12:14.344344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:16:55.074 [2024-11-18 23:12:14.344351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:16:55.074 [2024-11-18 23:12:14.344358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:16:55.074 [2024-11-18 23:12:14.344364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:16:55.074 [2024-11-18 23:12:14.344400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:55.074 [2024-11-18 23:12:14.344407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:55.074 [2024-11-18 23:12:14.344420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:55.074 [2024-11-18 23:12:14.344426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:55.074 [2024-11-18 23:12:14.344431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:55.074 [2024-11-18 23:12:14.344438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.074 [2024-11-18 23:12:14.344444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:55.074 [2024-11-18 23:12:14.344454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:16:55.074 [2024-11-18 23:12:14.344459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.074 [2024-11-18 23:12:14.344525] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:16:55.074 [2024-11-18 23:12:14.344532] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:57.603 [2024-11-18 23:12:16.690888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.690959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:57.603 [2024-11-18 23:12:16.690979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2346.346 ms 00:16:57.603 [2024-11-18 23:12:16.690988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.711333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.711391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:57.603 [2024-11-18 23:12:16.711413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.215 ms 00:16:57.603 [2024-11-18 23:12:16.711424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.711641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.711656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:57.603 [2024-11-18 23:12:16.711670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:16:57.603 [2024-11-18 23:12:16.711680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.723002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.723051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:57.603 [2024-11-18 23:12:16.723063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:16:57.603 [2024-11-18 23:12:16.723072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.723148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.723188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:57.603 [2024-11-18 23:12:16.723200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:57.603 [2024-11-18 23:12:16.723208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.723641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.723657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:57.603 [2024-11-18 23:12:16.723669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:16:57.603 [2024-11-18 23:12:16.723690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.723829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.723845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:57.603 [2024-11-18 23:12:16.723857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:16:57.603 [2024-11-18 23:12:16.723865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.730947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.730979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:16:57.603 [2024-11-18 23:12:16.730991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.034 ms 00:16:57.603 [2024-11-18 23:12:16.731000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.740176] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:57.603 [2024-11-18 23:12:16.757649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.757689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:57.603 [2024-11-18 23:12:16.757699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.567 ms 00:16:57.603 [2024-11-18 23:12:16.757709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.814885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.815211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:57.603 [2024-11-18 23:12:16.815252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.106 ms 00:16:57.603 [2024-11-18 23:12:16.815298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.815633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.815656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:57.603 [2024-11-18 23:12:16.815670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:16:57.603 [2024-11-18 23:12:16.815681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.819076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.819221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:57.603 [2024-11-18 23:12:16.819238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:16:57.603 [2024-11-18 23:12:16.819249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.822229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.822260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:57.603 [2024-11-18 23:12:16.822270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.940 ms 00:16:57.603 [2024-11-18 23:12:16.822279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.822603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.822623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:57.603 [2024-11-18 23:12:16.822635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:57.603 [2024-11-18 23:12:16.822646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.850428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.850560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:57.603 [2024-11-18 23:12:16.850577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.748 ms 00:16:57.603 [2024-11-18 23:12:16.850588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:57.603 [2024-11-18 23:12:16.854932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.854974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:57.603 [2024-11-18 23:12:16.854983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:16:57.603 [2024-11-18 23:12:16.854996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.858190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.858224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:57.603 [2024-11-18 23:12:16.858233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:16:57.603 [2024-11-18 23:12:16.858242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.861863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.861990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:57.603 [2024-11-18 23:12:16.862006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.563 ms 00:16:57.603 [2024-11-18 23:12:16.862019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.862067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.862079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:57.603 [2024-11-18 23:12:16.862088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:57.603 [2024-11-18 23:12:16.862100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.862207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:57.603 [2024-11-18 23:12:16.862220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:57.603 [2024-11-18 23:12:16.862228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:16:57.603 [2024-11-18 23:12:16.862238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:57.603 [2024-11-18 23:12:16.863253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:57.603 [2024-11-18 23:12:16.864284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2532.546 ms, result 0 00:16:57.603 [2024-11-18 23:12:16.864973] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:57.603 { 00:16:57.603 "name": "ftl0", 00:16:57.603 "uuid": "ec74e3b2-a15b-4a81-b737-658f06791b00" 00:16:57.603 } 00:16:57.603 23:12:16 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.603 23:12:16 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:57.862 23:12:17 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:58.120 [ 00:16:58.120 { 00:16:58.120 "name": "ftl0", 00:16:58.120 "aliases": [ 00:16:58.120 "ec74e3b2-a15b-4a81-b737-658f06791b00" 00:16:58.120 ], 00:16:58.121 "product_name": "FTL disk", 00:16:58.121 "block_size": 4096, 00:16:58.121 "num_blocks": 23592960, 00:16:58.121 "uuid": "ec74e3b2-a15b-4a81-b737-658f06791b00", 00:16:58.121 "assigned_rate_limits": { 00:16:58.121 "rw_ios_per_sec": 0, 00:16:58.121 "rw_mbytes_per_sec": 0, 00:16:58.121 "r_mbytes_per_sec": 0, 00:16:58.121 "w_mbytes_per_sec": 0 00:16:58.121 }, 00:16:58.121 "claimed": false, 00:16:58.121 "zoned": false, 00:16:58.121 "supported_io_types": { 00:16:58.121 "read": true, 00:16:58.121 "write": true, 00:16:58.121 "unmap": true, 00:16:58.121 "flush": true, 00:16:58.121 "reset": false, 00:16:58.121 "nvme_admin": false, 00:16:58.121 "nvme_io": false, 00:16:58.121 "nvme_io_md": false, 00:16:58.121 "write_zeroes": true, 00:16:58.121 "zcopy": false, 00:16:58.121 "get_zone_info": false, 00:16:58.121 "zone_management": false, 00:16:58.121 "zone_append": false, 00:16:58.121 "compare": false, 00:16:58.121 "compare_and_write": false, 00:16:58.121 "abort": false, 00:16:58.121 "seek_hole": false, 00:16:58.121 "seek_data": false, 00:16:58.121 "copy": false, 00:16:58.121 "nvme_iov_md": false 00:16:58.121 }, 00:16:58.121 "driver_specific": { 00:16:58.121 "ftl": { 00:16:58.121 "base_bdev": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 00:16:58.121 "cache": "nvc0n1p0" 00:16:58.121 } 00:16:58.121 } 00:16:58.121 } 00:16:58.121 ] 00:16:58.121 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:16:58.121 23:12:17 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:58.121 23:12:17 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:58.121 23:12:17 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:16:58.121 23:12:17 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:58.380 23:12:17 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:58.380 { 00:16:58.380 "name": "ftl0", 00:16:58.380 "aliases": [ 00:16:58.380 "ec74e3b2-a15b-4a81-b737-658f06791b00" 00:16:58.380 ], 00:16:58.380 "product_name": "FTL disk", 00:16:58.380 "block_size": 4096, 00:16:58.380 "num_blocks": 23592960, 00:16:58.380 "uuid": "ec74e3b2-a15b-4a81-b737-658f06791b00", 00:16:58.380 "assigned_rate_limits": { 00:16:58.380 "rw_ios_per_sec": 0, 00:16:58.380 "rw_mbytes_per_sec": 0, 00:16:58.380 "r_mbytes_per_sec": 0, 00:16:58.380 "w_mbytes_per_sec": 0 00:16:58.380 }, 00:16:58.380 "claimed": false, 00:16:58.380 "zoned": false, 00:16:58.380 "supported_io_types": { 00:16:58.380 "read": true, 00:16:58.380 "write": true, 00:16:58.380 "unmap": true, 00:16:58.380 "flush": true, 00:16:58.380 "reset": false, 00:16:58.380 "nvme_admin": false, 00:16:58.380 "nvme_io": false, 00:16:58.380 "nvme_io_md": false, 00:16:58.380 "write_zeroes": true, 00:16:58.380 "zcopy": false, 00:16:58.380 "get_zone_info": false, 00:16:58.380 "zone_management": false, 00:16:58.380 "zone_append": false, 00:16:58.380 "compare": false, 00:16:58.380 "compare_and_write": false, 00:16:58.380 "abort": false, 00:16:58.380 "seek_hole": false, 00:16:58.380 "seek_data": false, 00:16:58.380 "copy": false, 00:16:58.380 "nvme_iov_md": false 00:16:58.380 }, 00:16:58.380 "driver_specific": { 00:16:58.380 "ftl": { 00:16:58.380 "base_bdev": "93abb09d-e8f5-4329-b59d-62894a7ddfad", 
00:16:58.380 "cache": "nvc0n1p0" 00:16:58.380 } 00:16:58.380 } 00:16:58.380 } 00:16:58.380 ]' 00:16:58.380 23:12:17 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:58.380 23:12:17 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:16:58.380 23:12:17 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:58.640 [2024-11-18 23:12:17.896943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.897097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:58.640 [2024-11-18 23:12:17.897121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:58.640 [2024-11-18 23:12:17.897130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.897190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:58.640 [2024-11-18 23:12:17.897750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.897776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:58.640 [2024-11-18 23:12:17.897786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:16:58.640 [2024-11-18 23:12:17.897796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.898383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.898406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:58.640 [2024-11-18 23:12:17.898415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:58.640 [2024-11-18 23:12:17.898430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.902073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.902096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:58.640 [2024-11-18 23:12:17.902107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:16:58.640 [2024-11-18 23:12:17.902117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.909103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.909137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:58.640 [2024-11-18 23:12:17.909148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:16:58.640 [2024-11-18 23:12:17.909174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.910812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.910850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:58.640 [2024-11-18 23:12:17.910860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.568 ms 00:16:58.640 [2024-11-18 23:12:17.910870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.915515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.915638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:58.640 [2024-11-18 23:12:17.915664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.599 ms 00:16:58.640 [2024-11-18 23:12:17.915674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.915853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.915870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:58.640 [2024-11-18 23:12:17.915881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:16:58.640 [2024-11-18 23:12:17.915894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.917406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.917443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:58.640 [2024-11-18 23:12:17.917453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:16:58.640 [2024-11-18 23:12:17.917465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.919078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.919200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:58.640 [2024-11-18 23:12:17.919215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:16:58.640 [2024-11-18 23:12:17.919224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.920334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.920366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:58.640 [2024-11-18 23:12:17.920375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:16:58.640 [2024-11-18 23:12:17.920384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.921536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.640 [2024-11-18 23:12:17.921570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:58.640 [2024-11-18 23:12:17.921579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:16:58.640 [2024-11-18 23:12:17.921587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.640 [2024-11-18 23:12:17.921642] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:58.640 [2024-11-18 23:12:17.921671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921736] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:58.640 [2024-11-18 23:12:17.921811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 
23:12:17.921966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.921995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:16:58.641 [2024-11-18 23:12:17.922194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:58.641 [2024-11-18 23:12:17.922569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:58.641 [2024-11-18 23:12:17.922577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:16:58.641 [2024-11-18 23:12:17.922587] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:58.641 [2024-11-18 23:12:17.922595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:58.641 [2024-11-18 23:12:17.922604] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:58.641 [2024-11-18 23:12:17.922611] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:58.641 [2024-11-18 23:12:17.922620] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:58.641 [2024-11-18 23:12:17.922629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:58.642 
[2024-11-18 23:12:17.922640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:58.642 [2024-11-18 23:12:17.922646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:58.642 [2024-11-18 23:12:17.922655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:58.642 [2024-11-18 23:12:17.922662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.642 [2024-11-18 23:12:17.922671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:58.642 [2024-11-18 23:12:17.922679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:16:58.642 [2024-11-18 23:12:17.922690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.924829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.642 [2024-11-18 23:12:17.924858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:58.642 [2024-11-18 23:12:17.924868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.097 ms 00:16:58.642 [2024-11-18 23:12:17.924880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.924983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:58.642 [2024-11-18 23:12:17.924994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:58.642 [2024-11-18 23:12:17.925002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:58.642 [2024-11-18 23:12:17.925012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.931765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.931876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:58.642 [2024-11-18 23:12:17.931930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.931959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.932103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.932175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:58.642 [2024-11-18 23:12:17.932200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.932254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.932329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.932371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:58.642 [2024-11-18 23:12:17.932423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.932447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.932532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.932563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:58.642 [2024-11-18 23:12:17.932583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.932632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.944616] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.944762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:58.642 [2024-11-18 23:12:17.944814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.944874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.955241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.955379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:58.642 [2024-11-18 23:12:17.955430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.955480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.955570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.955622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:58.642 [2024-11-18 23:12:17.955646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.955690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.955765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.955819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:58.642 [2024-11-18 23:12:17.955844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.955897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.956015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.956062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:58.642 [2024-11-18 23:12:17.956126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.956151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.956266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.956304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:58.642 [2024-11-18 23:12:17.956325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.956425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.956502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.956563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:58.642 [2024-11-18 23:12:17.956586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.956607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 [2024-11-18 23:12:17.956709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:58.642 [2024-11-18 23:12:17.956781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:58.642 [2024-11-18 23:12:17.956832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:58.642 [2024-11-18 23:12:17.956858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:58.642 
[2024-11-18 23:12:17.957067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 60.106 ms, result 0 00:16:58.642 true 00:16:58.642 23:12:17 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 85464 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85464 ']' 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85464 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85464 00:16:58.642 killing process with pid 85464 00:16:58.642 23:12:17 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.642 23:12:18 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.642 23:12:18 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85464' 00:16:58.642 23:12:18 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85464 00:16:58.642 23:12:18 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85464 00:17:03.923 23:12:22 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:04.858 65536+0 records in 00:17:04.858 65536+0 records out 00:17:04.858 268435456 bytes (268 MB, 256 MiB) copied, 1.08045 s, 248 MB/s 00:17:04.858 23:12:23 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:04.858 [2024-11-18 23:12:23.965971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
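The trace above is one complete trim-test step: the previous app instance (pid 85464) is killed, dd produces a 256 MiB random pattern (65536 blocks of 4 KiB), and spdk_dd replays that pattern into the ftl0 bdev. A minimal standalone equivalent of those two commands, assuming the same repo layout as this run and that the harness redirects dd's output to test/ftl/random_pattern (the of= target is not visible in the trace):

    # 256 MiB of random data: 65536 blocks x 4 KiB
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
    # replay the pattern through the FTL bdev: --ob names the output bdev,
    # --json supplies the bdev configuration saved earlier by the test
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json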
00:17:04.858 [2024-11-18 23:12:23.966292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85624 ] 00:17:04.858 [2024-11-18 23:12:24.107955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.858 [2024-11-18 23:12:24.151259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.118 [2024-11-18 23:12:24.253976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.118 [2024-11-18 23:12:24.254041] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:05.118 [2024-11-18 23:12:24.412182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.412227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:05.118 [2024-11-18 23:12:24.412241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:05.118 [2024-11-18 23:12:24.412248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.414561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.414595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.118 [2024-11-18 23:12:24.414607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:17:05.118 [2024-11-18 23:12:24.414615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.414685] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:05.118 [2024-11-18 23:12:24.414913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:05.118 [2024-11-18 23:12:24.414929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.414937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.118 [2024-11-18 23:12:24.414948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:17:05.118 [2024-11-18 23:12:24.414955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.416682] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:05.118 [2024-11-18 23:12:24.419690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.419732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:05.118 [2024-11-18 23:12:24.419743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:17:05.118 [2024-11-18 23:12:24.419754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.419820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.419830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:05.118 [2024-11-18 23:12:24.419839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:05.118 [2024-11-18 23:12:24.419845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.426326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:05.118 [2024-11-18 23:12:24.426356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.118 [2024-11-18 23:12:24.426365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.442 ms 00:17:05.118 [2024-11-18 23:12:24.426373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.118 [2024-11-18 23:12:24.426477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.118 [2024-11-18 23:12:24.426489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.118 [2024-11-18 23:12:24.426501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:05.118 [2024-11-18 23:12:24.426508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.426534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.119 [2024-11-18 23:12:24.426547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:05.119 [2024-11-18 23:12:24.426555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:05.119 [2024-11-18 23:12:24.426562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.426586] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:05.119 [2024-11-18 23:12:24.428328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.119 [2024-11-18 23:12:24.428357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.119 [2024-11-18 23:12:24.428372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:17:05.119 [2024-11-18 23:12:24.428383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.428431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.119 [2024-11-18 23:12:24.428442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:05.119 [2024-11-18 23:12:24.428453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:05.119 [2024-11-18 23:12:24.428461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.428479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:05.119 [2024-11-18 23:12:24.428497] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:05.119 [2024-11-18 23:12:24.428532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:05.119 [2024-11-18 23:12:24.428547] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:05.119 [2024-11-18 23:12:24.428655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:05.119 [2024-11-18 23:12:24.428665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:05.119 [2024-11-18 23:12:24.428676] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:05.119 [2024-11-18 23:12:24.428690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:05.119 [2024-11-18 23:12:24.428699] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:05.119 [2024-11-18 23:12:24.428706] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:05.119 [2024-11-18 23:12:24.428714] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:05.119 [2024-11-18 23:12:24.428725] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:05.119 [2024-11-18 23:12:24.428732] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:05.119 [2024-11-18 23:12:24.428740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.119 [2024-11-18 23:12:24.428749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:05.119 [2024-11-18 23:12:24.428759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:05.119 [2024-11-18 23:12:24.428766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.428853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.119 [2024-11-18 23:12:24.428862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:05.119 [2024-11-18 23:12:24.428869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:05.119 [2024-11-18 23:12:24.428876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.119 [2024-11-18 23:12:24.428980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:05.119 [2024-11-18 23:12:24.428996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:05.119 [2024-11-18 23:12:24.429006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:05.119 [2024-11-18 23:12:24.429034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:05.119 [2024-11-18 23:12:24.429063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.119 [2024-11-18 23:12:24.429078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:05.119 [2024-11-18 23:12:24.429086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:05.119 [2024-11-18 23:12:24.429094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:05.119 [2024-11-18 23:12:24.429101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:05.119 [2024-11-18 23:12:24.429109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:05.119 [2024-11-18 23:12:24.429116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:05.119 [2024-11-18 23:12:24.429132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429139] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:05.119 [2024-11-18 23:12:24.429171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:05.119 [2024-11-18 23:12:24.429196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:05.119 [2024-11-18 23:12:24.429228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:05.119 [2024-11-18 23:12:24.429251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:05.119 [2024-11-18 23:12:24.429275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.119 [2024-11-18 23:12:24.429290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:05.119 [2024-11-18 23:12:24.429298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:05.119 [2024-11-18 23:12:24.429306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:05.119 [2024-11-18 23:12:24.429313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:05.119 [2024-11-18 23:12:24.429321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:05.119 [2024-11-18 23:12:24.429329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:05.119 [2024-11-18 23:12:24.429352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:05.119 [2024-11-18 23:12:24.429360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429366] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:05.119 [2024-11-18 23:12:24.429373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:05.119 [2024-11-18 23:12:24.429380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:05.119 [2024-11-18 23:12:24.429395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:05.119 [2024-11-18 23:12:24.429402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:05.119 [2024-11-18 23:12:24.429408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:05.119 
[2024-11-18 23:12:24.429415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:05.119 [2024-11-18 23:12:24.429421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:05.119 [2024-11-18 23:12:24.429428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:05.119 [2024-11-18 23:12:24.429436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:05.119 [2024-11-18 23:12:24.429445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.119 [2024-11-18 23:12:24.429454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:05.119 [2024-11-18 23:12:24.429464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:05.119 [2024-11-18 23:12:24.429472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:05.119 [2024-11-18 23:12:24.429479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:05.119 [2024-11-18 23:12:24.429486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:05.119 [2024-11-18 23:12:24.429493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:05.119 [2024-11-18 23:12:24.429500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:05.119 [2024-11-18 23:12:24.429507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:05.119 [2024-11-18 23:12:24.429514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:05.119 [2024-11-18 23:12:24.429521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:05.119 [2024-11-18 23:12:24.429528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:05.119 [2024-11-18 23:12:24.429535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:05.120 [2024-11-18 23:12:24.429542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:05.120 [2024-11-18 23:12:24.429549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:05.120 [2024-11-18 23:12:24.429557] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:05.120 [2024-11-18 23:12:24.429565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:05.120 [2024-11-18 23:12:24.429573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:05.120 [2024-11-18 23:12:24.429588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:05.120 [2024-11-18 23:12:24.429596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:05.120 [2024-11-18 23:12:24.429604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:05.120 [2024-11-18 23:12:24.429612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.429623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:05.120 [2024-11-18 23:12:24.429635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:17:05.120 [2024-11-18 23:12:24.429642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.453322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.453413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.120 [2024-11-18 23:12:24.453445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.621 ms 00:17:05.120 [2024-11-18 23:12:24.453466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.453804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.453847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.120 [2024-11-18 23:12:24.453871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:17:05.120 [2024-11-18 23:12:24.453899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.465101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.465133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.120 [2024-11-18 23:12:24.465143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.148 ms 00:17:05.120 [2024-11-18 23:12:24.465151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.465222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.465232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.120 [2024-11-18 23:12:24.465243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:05.120 [2024-11-18 23:12:24.465254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.465666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.465689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.120 [2024-11-18 23:12:24.465698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:17:05.120 [2024-11-18 23:12:24.465713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.465847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.465857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.120 [2024-11-18 23:12:24.465866] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:17:05.120 [2024-11-18 23:12:24.465877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.471977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.472115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.120 [2024-11-18 23:12:24.472131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.078 ms 00:17:05.120 [2024-11-18 23:12:24.472139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.475225] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:05.120 [2024-11-18 23:12:24.475257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:05.120 [2024-11-18 23:12:24.475277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.475285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:05.120 [2024-11-18 23:12:24.475293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:17:05.120 [2024-11-18 23:12:24.475300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.120 [2024-11-18 23:12:24.490463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.120 [2024-11-18 23:12:24.490590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:05.120 [2024-11-18 23:12:24.490607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.098 ms 00:17:05.120 [2024-11-18 23:12:24.490615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.386 [2024-11-18 23:12:24.492608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.492640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:05.387 [2024-11-18 23:12:24.492649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:17:05.387 [2024-11-18 23:12:24.492656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.494483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.494513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:05.387 [2024-11-18 23:12:24.494527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.788 ms 00:17:05.387 [2024-11-18 23:12:24.494534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.494851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.494862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.387 [2024-11-18 23:12:24.494871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:05.387 [2024-11-18 23:12:24.494878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.513753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.513921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:05.387 [2024-11-18 23:12:24.513939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.840 ms 00:17:05.387 [2024-11-18 23:12:24.513947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.521634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:05.387 [2024-11-18 23:12:24.538079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.538117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.387 [2024-11-18 23:12:24.538129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.061 ms 00:17:05.387 [2024-11-18 23:12:24.538143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.538233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.538245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:05.387 [2024-11-18 23:12:24.538255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:05.387 [2024-11-18 23:12:24.538263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.387 [2024-11-18 23:12:24.538311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.387 [2024-11-18 23:12:24.538322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.387 [2024-11-18 23:12:24.538331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:05.387 [2024-11-18 23:12:24.538339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.388 [2024-11-18 23:12:24.538369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.388 [2024-11-18 23:12:24.538377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.388 [2024-11-18 23:12:24.538386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:05.388 [2024-11-18 23:12:24.538394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.388 [2024-11-18 23:12:24.538425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:05.388 [2024-11-18 23:12:24.538435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.388 [2024-11-18 23:12:24.538446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:05.388 [2024-11-18 23:12:24.538456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:05.388 [2024-11-18 23:12:24.538464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.388 [2024-11-18 23:12:24.542719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.388 [2024-11-18 23:12:24.542754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.388 [2024-11-18 23:12:24.542771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.233 ms 00:17:05.388 [2024-11-18 23:12:24.542779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.388 [2024-11-18 23:12:24.542865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.388 [2024-11-18 23:12:24.542876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:05.388 [2024-11-18 23:12:24.542891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:05.388 [2024-11-18 23:12:24.542899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.388 
[2024-11-18 23:12:24.543806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:05.388 [2024-11-18 23:12:24.544834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.332 ms, result 0 00:17:05.388 [2024-11-18 23:12:24.545760] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:05.388 [2024-11-18 23:12:24.555199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:06.334  [2024-11-18T23:12:26.656Z] Copying: 24/256 [MB] (24 MBps) [2024-11-18T23:12:27.604Z] Copying: 39/256 [MB] (14 MBps) [2024-11-18T23:12:28.993Z] Copying: 54/256 [MB] (15 MBps) [2024-11-18T23:12:29.563Z] Copying: 88/256 [MB] (33 MBps) [2024-11-18T23:12:30.962Z] Copying: 110/256 [MB] (21 MBps) [2024-11-18T23:12:31.912Z] Copying: 121/256 [MB] (11 MBps) [2024-11-18T23:12:32.857Z] Copying: 136/256 [MB] (14 MBps) [2024-11-18T23:12:33.801Z] Copying: 149/256 [MB] (12 MBps) [2024-11-18T23:12:34.742Z] Copying: 171/256 [MB] (22 MBps) [2024-11-18T23:12:35.719Z] Copying: 196/256 [MB] (24 MBps) [2024-11-18T23:12:36.663Z] Copying: 218/256 [MB] (21 MBps) [2024-11-18T23:12:36.926Z] Copying: 244/256 [MB] (26 MBps) [2024-11-18T23:12:36.926Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-18 23:12:36.863179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.548 [2024-11-18 23:12:36.864527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.548 [2024-11-18 23:12:36.864586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:17.548 [2024-11-18 23:12:36.864598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:17.548 [2024-11-18 23:12:36.864610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.548 [2024-11-18 23:12:36.864626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:17.548 [2024-11-18 23:12:36.865144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.548 [2024-11-18 23:12:36.865180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:17.548 [2024-11-18 23:12:36.865193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:17:17.548 [2024-11-18 23:12:36.865200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.548 [2024-11-18 23:12:36.866768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.548 [2024-11-18 23:12:36.866799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:17.548 [2024-11-18 23:12:36.866807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:17:17.548 [2024-11-18 23:12:36.866813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.548 [2024-11-18 23:12:36.872110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.548 [2024-11-18 23:12:36.872143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:17.548 [2024-11-18 23:12:36.872150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.283 ms 00:17:17.548 [2024-11-18 23:12:36.872163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.548 [2024-11-18 23:12:36.877559] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:17.548 [2024-11-18 23:12:36.877777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:17.548 [2024-11-18 23:12:36.877789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.370 ms 00:17:17.548 [2024-11-18 23:12:36.877796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.879289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.879325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.549 [2024-11-18 23:12:36.879332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.454 ms 00:17:17.549 [2024-11-18 23:12:36.879338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.883066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.883099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.549 [2024-11-18 23:12:36.883107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:17:17.549 [2024-11-18 23:12:36.883116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.883217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.883225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.549 [2024-11-18 23:12:36.883232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:17.549 [2024-11-18 23:12:36.883239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.885213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.885239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:17.549 [2024-11-18 23:12:36.885247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.957 ms 00:17:17.549 [2024-11-18 23:12:36.885254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.886624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.886649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:17.549 [2024-11-18 23:12:36.886655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:17:17.549 [2024-11-18 23:12:36.886661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.887744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.887771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.549 [2024-11-18 23:12:36.887778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.059 ms 00:17:17.549 [2024-11-18 23:12:36.887783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.549 [2024-11-18 23:12:36.888850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.549 [2024-11-18 23:12:36.888948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:17.549 [2024-11-18 23:12:36.888958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.021 ms 00:17:17.549 [2024-11-18 23:12:36.888964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
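Each management step in this log is reported as a four-line group (Action or Rollback, then name, duration, status), and the band dump that follows prints one line per band. Both are easy to summarize offline; a minimal sketch, assuming the raw console log has been saved to build.log with one entry per line (the build.log name is an assumption, not part of the harness):

    # pair each step name with its duration
    grep -E '\] (name|duration): ' build.log \
      | sed -E 's/.*(name|duration): //' \
      | paste - -
    # count bands by state (e.g. free, open, closed)
    grep -oE 'state: [a-z]+' build.log | sort | uniq -c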
00:17:17.549 [2024-11-18 23:12:36.888994] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:17.549 [2024-11-18 23:12:36.889006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 
[2024-11-18 23:12:36.889144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:17:17.549 [2024-11-18 23:12:36.889303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:17.549 [2024-11-18 23:12:36.889396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:17.550 [2024-11-18 23:12:36.889604] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:17.550 [2024-11-18 23:12:36.889610] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:17:17.550 [2024-11-18 23:12:36.889616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:17.550 [2024-11-18 23:12:36.889627] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:17.550 [2024-11-18 23:12:36.889633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:17.550 [2024-11-18 23:12:36.889639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:17.550 [2024-11-18 23:12:36.889644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:17.550 [2024-11-18 23:12:36.889650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:17.550 [2024-11-18 23:12:36.889656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:17.550 [2024-11-18 23:12:36.889661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:17.550 [2024-11-18 23:12:36.889665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:17.550 [2024-11-18 23:12:36.889671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.550 [2024-11-18 23:12:36.889676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:17.550 [2024-11-18 23:12:36.889683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:17:17.550 [2024-11-18 23:12:36.889690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.891421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.550 [2024-11-18 23:12:36.891435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:17.550 [2024-11-18 23:12:36.891442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:17:17.550 [2024-11-18 23:12:36.891448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.891533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.550 [2024-11-18 23:12:36.891540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:17.550 [2024-11-18 23:12:36.891551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:17.550 [2024-11-18 23:12:36.891557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.897087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.897117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:17.550 [2024-11-18 23:12:36.897125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.897137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.897194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.897201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.550 [2024-11-18 23:12:36.897210] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.897216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.897246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.897253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.550 [2024-11-18 23:12:36.897259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.897265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.897279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.897285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.550 [2024-11-18 23:12:36.897291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.897299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.908246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.908281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.550 [2024-11-18 23:12:36.908291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.908297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.916830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.916863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:17.550 [2024-11-18 23:12:36.916877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.916884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.916912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.916924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.550 [2024-11-18 23:12:36.916931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.916937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.916962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.916970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.550 [2024-11-18 23:12:36.916976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.916983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.917042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.917051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.550 [2024-11-18 23:12:36.917057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.917063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.917087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.917095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:17:17.550 [2024-11-18 23:12:36.917101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.917107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.917148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.550 [2024-11-18 23:12:36.917169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.550 [2024-11-18 23:12:36.917176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.550 [2024-11-18 23:12:36.917182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.550 [2024-11-18 23:12:36.917222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.551 [2024-11-18 23:12:36.917230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.551 [2024-11-18 23:12:36.917237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.551 [2024-11-18 23:12:36.917243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.551 [2024-11-18 23:12:36.917372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.819 ms, result 0 00:17:18.130 00:17:18.130 00:17:18.130 23:12:37 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=85765 00:17:18.130 23:12:37 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 85765 00:17:18.130 23:12:37 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 85765 ']' 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.130 23:12:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:18.395 [2024-11-18 23:12:37.523965] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
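At this point trim.sh (the @71-@73 trace lines above) launches a fresh spdk_tgt with the ftl_init debug log component enabled and blocks until the target answers on its RPC socket. A minimal sketch of that launch-and-wait pattern, reusing the binary path, flag, and socket path from the log; the polling loop is an illustrative assumption, not the exact autotest_common.sh waitforlisten implementation:

    # Launch the SPDK target with ftl_init logging, as trim.sh@71 does.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC_SOCK=/var/tmp/spdk.sock
    "$SPDK_BIN" -L ftl_init &
    svcpid=$!

    # Poll until the UNIX-domain RPC socket is serving (assumed loop shape).
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done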
00:17:18.395 [2024-11-18 23:12:37.524084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85765 ] 00:17:18.395 [2024-11-18 23:12:37.670383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.395 [2024-11-18 23:12:37.712593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.334 23:12:38 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.334 23:12:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:19.334 23:12:38 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:19.334 [2024-11-18 23:12:38.561970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.334 [2024-11-18 23:12:38.562027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:19.597 [2024-11-18 23:12:38.725971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.726014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:19.597 [2024-11-18 23:12:38.726027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.597 [2024-11-18 23:12:38.726035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.727927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.727962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:19.597 [2024-11-18 23:12:38.727971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:17:19.597 [2024-11-18 23:12:38.727978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.728033] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:19.597 [2024-11-18 23:12:38.728240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:19.597 [2024-11-18 23:12:38.728252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.728260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:19.597 [2024-11-18 23:12:38.728269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:17:19.597 [2024-11-18 23:12:38.728278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.729575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:19.597 [2024-11-18 23:12:38.732094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.732126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:19.597 [2024-11-18 23:12:38.732137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:17:19.597 [2024-11-18 23:12:38.732143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.732204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.732213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:19.597 [2024-11-18 23:12:38.732223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:19.597 [2024-11-18 23:12:38.732230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.738444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.738470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:19.597 [2024-11-18 23:12:38.738479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:17:19.597 [2024-11-18 23:12:38.738485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.738562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.738574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:19.597 [2024-11-18 23:12:38.738583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:19.597 [2024-11-18 23:12:38.738589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.738609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.597 [2024-11-18 23:12:38.738617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:19.597 [2024-11-18 23:12:38.738624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:19.597 [2024-11-18 23:12:38.738632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.597 [2024-11-18 23:12:38.738651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:19.597 [2024-11-18 23:12:38.740207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.598 [2024-11-18 23:12:38.740327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:19.598 [2024-11-18 23:12:38.740343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:17:19.598 [2024-11-18 23:12:38.740351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.598 [2024-11-18 23:12:38.740390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.598 [2024-11-18 23:12:38.740399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:19.598 [2024-11-18 23:12:38.740408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:19.598 [2024-11-18 23:12:38.740415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.598 [2024-11-18 23:12:38.740431] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:19.598 [2024-11-18 23:12:38.740448] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:19.598 [2024-11-18 23:12:38.740483] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:19.598 [2024-11-18 23:12:38.740499] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:19.598 [2024-11-18 23:12:38.740583] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:19.598 [2024-11-18 23:12:38.740594] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:19.598 [2024-11-18 23:12:38.740603] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:19.598 [2024-11-18 23:12:38.740613] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:19.598 [2024-11-18 23:12:38.740623] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:19.598 [2024-11-18 23:12:38.740632] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:19.598 [2024-11-18 23:12:38.740638] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:19.598 [2024-11-18 23:12:38.740645] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:19.598 [2024-11-18 23:12:38.740651] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:19.598 [2024-11-18 23:12:38.740658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.598 [2024-11-18 23:12:38.740666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:19.598 [2024-11-18 23:12:38.740675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:19.598 [2024-11-18 23:12:38.740681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.598 [2024-11-18 23:12:38.740749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.598 [2024-11-18 23:12:38.740755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:19.598 [2024-11-18 23:12:38.740762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:19.598 [2024-11-18 23:12:38.740768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.598 [2024-11-18 23:12:38.740850] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:19.598 [2024-11-18 23:12:38.740858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:19.598 [2024-11-18 23:12:38.740868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.598 [2024-11-18 23:12:38.740874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.740883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:19.598 [2024-11-18 23:12:38.740888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.740895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:19.598 [2024-11-18 23:12:38.740901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:19.598 [2024-11-18 23:12:38.740913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:19.598 [2024-11-18 23:12:38.740919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.598 [2024-11-18 23:12:38.740926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:19.598 [2024-11-18 23:12:38.740932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:19.598 [2024-11-18 23:12:38.740939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:19.598 [2024-11-18 23:12:38.740945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:19.598 [2024-11-18 23:12:38.740952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:19.598 [2024-11-18 23:12:38.740957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 
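The layout figures printed in this dump are internally consistent: with 23592960 L2P entries at 4 bytes each, the L2P table must span 90 MiB, which is exactly the "Region l2p ... blocks: 90.00 MiB" reported above. A quick arithmetic check; the 4 KiB logical block size in the last line is an assumption based on common FTL configuration, not something stated in this log:

    echo $((23592960 * 4))                  # 94371840 bytes of L2P table
    echo $((23592960 * 4 / 1048576))        # = 90, matching the 90.00 MiB l2p region
    echo $((23592960 * 4096 / 1073741824))  # = 90 GiB addressable, assuming 4 KiB blocks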
[2024-11-18 23:12:38.740964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:19.598 [2024-11-18 23:12:38.740971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:19.598 [2024-11-18 23:12:38.740978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.740984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:19.598 [2024-11-18 23:12:38.740994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.598 [2024-11-18 23:12:38.741008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:19.598 [2024-11-18 23:12:38.741014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.598 [2024-11-18 23:12:38.741027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:19.598 [2024-11-18 23:12:38.741035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.598 [2024-11-18 23:12:38.741048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:19.598 [2024-11-18 23:12:38.741054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:19.598 [2024-11-18 23:12:38.741067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:19.598 [2024-11-18 23:12:38.741074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.598 [2024-11-18 23:12:38.741087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:19.598 [2024-11-18 23:12:38.741093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:19.598 [2024-11-18 23:12:38.741102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:19.598 [2024-11-18 23:12:38.741108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:19.598 [2024-11-18 23:12:38.741115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:19.598 [2024-11-18 23:12:38.741121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:19.598 [2024-11-18 23:12:38.741134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:19.598 [2024-11-18 23:12:38.741141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741147] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:19.598 [2024-11-18 23:12:38.741169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:19.598 [2024-11-18 23:12:38.741177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:19.598 [2024-11-18 23:12:38.741185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:19.598 [2024-11-18 23:12:38.741194] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:19.598 [2024-11-18 23:12:38.741202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:19.598 [2024-11-18 23:12:38.741208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:19.599 [2024-11-18 23:12:38.741215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:19.599 [2024-11-18 23:12:38.741221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:19.599 [2024-11-18 23:12:38.741233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:19.599 [2024-11-18 23:12:38.741241] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:19.599 [2024-11-18 23:12:38.741250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:19.599 [2024-11-18 23:12:38.741266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:19.599 [2024-11-18 23:12:38.741272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:19.599 [2024-11-18 23:12:38.741280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:19.599 [2024-11-18 23:12:38.741286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:19.599 [2024-11-18 23:12:38.741293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:19.599 [2024-11-18 23:12:38.741298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:19.599 [2024-11-18 23:12:38.741306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:19.599 [2024-11-18 23:12:38.741311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:19.599 [2024-11-18 23:12:38.741318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:19.599 [2024-11-18 23:12:38.741349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:19.599 [2024-11-18 
23:12:38.741356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:19.599 [2024-11-18 23:12:38.741371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:19.599 [2024-11-18 23:12:38.741377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:19.599 [2024-11-18 23:12:38.741384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:19.599 [2024-11-18 23:12:38.741390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.741397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:19.599 [2024-11-18 23:12:38.741403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:17:19.599 [2024-11-18 23:12:38.741413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.752823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.752932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.599 [2024-11-18 23:12:38.752984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.365 ms 00:17:19.599 [2024-11-18 23:12:38.753006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.753113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.753137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.599 [2024-11-18 23:12:38.753234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:19.599 [2024-11-18 23:12:38.753256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.762807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.762913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.599 [2024-11-18 23:12:38.762959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.522 ms 00:17:19.599 [2024-11-18 23:12:38.762980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.763025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.763047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.599 [2024-11-18 23:12:38.763064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:19.599 [2024-11-18 23:12:38.763080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.763504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.763583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.599 [2024-11-18 23:12:38.763637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:17:19.599 [2024-11-18 23:12:38.763658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:19.599 [2024-11-18 23:12:38.763781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.763809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.599 [2024-11-18 23:12:38.763863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:19.599 [2024-11-18 23:12:38.763883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.779993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.780151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:19.599 [2024-11-18 23:12:38.780235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.081 ms 00:17:19.599 [2024-11-18 23:12:38.780268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.783294] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:19.599 [2024-11-18 23:12:38.783450] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:19.599 [2024-11-18 23:12:38.783526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.783555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:19.599 [2024-11-18 23:12:38.783608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.095 ms 00:17:19.599 [2024-11-18 23:12:38.783638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.808420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.808576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:19.599 [2024-11-18 23:12:38.808635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.670 ms 00:17:19.599 [2024-11-18 23:12:38.808663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.810897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.811012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:19.599 [2024-11-18 23:12:38.811062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:17:19.599 [2024-11-18 23:12:38.811087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.599 [2024-11-18 23:12:38.813174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.599 [2024-11-18 23:12:38.813303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:19.600 [2024-11-18 23:12:38.813361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:17:19.600 [2024-11-18 23:12:38.813386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.814063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.814208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.600 [2024-11-18 23:12:38.814266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:19.600 [2024-11-18 23:12:38.814292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.831502] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.831636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:19.600 [2024-11-18 23:12:38.831689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.148 ms 00:17:19.600 [2024-11-18 23:12:38.831720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.839279] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.600 [2024-11-18 23:12:38.852975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.853087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.600 [2024-11-18 23:12:38.853137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.178 ms 00:17:19.600 [2024-11-18 23:12:38.853179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.853260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.853292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:19.600 [2024-11-18 23:12:38.853317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:19.600 [2024-11-18 23:12:38.853343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.853404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.853426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.600 [2024-11-18 23:12:38.853452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:19.600 [2024-11-18 23:12:38.853528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.853571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.853594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:19.600 [2024-11-18 23:12:38.853656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.600 [2024-11-18 23:12:38.853679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.853726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:19.600 [2024-11-18 23:12:38.854207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.854233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:19.600 [2024-11-18 23:12:38.854278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:17:19.600 [2024-11-18 23:12:38.854304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.858304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.858410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:19.600 [2024-11-18 23:12:38.858458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:17:19.600 [2024-11-18 23:12:38.858482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.858827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.600 [2024-11-18 23:12:38.858893] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:19.600 [2024-11-18 23:12:38.858984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:19.600 [2024-11-18 23:12:38.859012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.600 [2024-11-18 23:12:38.860070] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:19.600 [2024-11-18 23:12:38.861192] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.825 ms, result 0 00:17:19.600 [2024-11-18 23:12:38.863113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:19.600 Some configs were skipped because the RPC state that can call them passed over. 00:17:19.600 23:12:38 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:19.862 [2024-11-18 23:12:39.084744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.862 [2024-11-18 23:12:39.084795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:19.862 [2024-11-18 23:12:39.084815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:17:19.862 [2024-11-18 23:12:39.084824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.862 [2024-11-18 23:12:39.084861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.084 ms, result 0 00:17:19.862 true 00:17:19.862 23:12:39 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:20.122 [2024-11-18 23:12:39.300749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.122 [2024-11-18 23:12:39.300921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:20.122 [2024-11-18 23:12:39.300940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:17:20.122 [2024-11-18 23:12:39.300950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.122 [2024-11-18 23:12:39.300992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.940 ms, result 0 00:17:20.122 true 00:17:20.123 23:12:39 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 85765 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 85765 ']' 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 85765 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85765 00:17:20.123 killing process with pid 85765 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85765' 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 85765 00:17:20.123 23:12:39 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 85765 00:17:20.123 [2024-11-18 23:12:39.476554] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.476616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.123 [2024-11-18 23:12:39.476632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:20.123 [2024-11-18 23:12:39.476640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.476667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.123 [2024-11-18 23:12:39.477256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.477282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.123 [2024-11-18 23:12:39.477294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:17:20.123 [2024-11-18 23:12:39.477304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.477602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.477627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.123 [2024-11-18 23:12:39.477639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:20.123 [2024-11-18 23:12:39.477649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.482134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.482193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.123 [2024-11-18 23:12:39.482204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.457 ms 00:17:20.123 [2024-11-18 23:12:39.482218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.489363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.489410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.123 [2024-11-18 23:12:39.489426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.105 ms 00:17:20.123 [2024-11-18 23:12:39.489439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.492220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.492266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.123 [2024-11-18 23:12:39.492276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:17:20.123 [2024-11-18 23:12:39.492285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.123 [2024-11-18 23:12:39.497888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.123 [2024-11-18 23:12:39.497945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.123 [2024-11-18 23:12:39.497956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.557 ms 00:17:20.123 [2024-11-18 23:12:39.497966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.498122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.385 [2024-11-18 23:12:39.498138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.385 [2024-11-18 23:12:39.498147] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:20.385 [2024-11-18 23:12:39.498184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.501387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.385 [2024-11-18 23:12:39.501435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:20.385 [2024-11-18 23:12:39.501445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:17:20.385 [2024-11-18 23:12:39.501457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.504258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.385 [2024-11-18 23:12:39.504306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:20.385 [2024-11-18 23:12:39.504316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:17:20.385 [2024-11-18 23:12:39.504326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.506518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.385 [2024-11-18 23:12:39.506571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.385 [2024-11-18 23:12:39.506580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.148 ms 00:17:20.385 [2024-11-18 23:12:39.506591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.508908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.385 [2024-11-18 23:12:39.508962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.385 [2024-11-18 23:12:39.508972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.244 ms 00:17:20.385 [2024-11-18 23:12:39.508981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.385 [2024-11-18 23:12:39.509023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.385 [2024-11-18 23:12:39.509041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.385 [2024-11-18 23:12:39.509139] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 11-84: 0 / 261120 wr_cnt: 0 state: free [74 identical per-band entries condensed] 00:17:20.386 [2024-11-18 23:12:39.509840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.386 [2024-11-18 23:12:39.509998] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.386 [2024-11-18 23:12:39.510008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:17:20.386 [2024-11-18 23:12:39.510019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.386 [2024-11-18 23:12:39.510026] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:20.386 [2024-11-18 23:12:39.510035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.386 [2024-11-18 23:12:39.510048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.386 [2024-11-18 23:12:39.510058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.386 [2024-11-18 23:12:39.510066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.386 [2024-11-18 23:12:39.510080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.386 [2024-11-18 23:12:39.510088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.386 [2024-11-18 23:12:39.510097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.386 [2024-11-18 23:12:39.510104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.386 
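The two bdev_ftl_unmap calls traced earlier (trim.sh@78 and @79) are not arbitrary: they trim the first and the last 1024-block stripe of the device, since 23591936 is exactly the L2P entry count minus 1024. Reproduced as a standalone sketch using the same RPC script path shown in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    L2P_ENTRIES=23592960   # reported in the ftl_layout.c dump above

    # Trim the first and last 1024-block ranges of ftl0, as trim.sh@78/@79 do.
    "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$RPC" bdev_ftl_unmap -b ftl0 --lba $((L2P_ENTRIES - 1024)) --num_blocks 1024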
[2024-11-18 23:12:39.510114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.386 [2024-11-18 23:12:39.510124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:17:20.386 [2024-11-18 23:12:39.510137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.386 [2024-11-18 23:12:39.512350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.386 [2024-11-18 23:12:39.512401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.386 [2024-11-18 23:12:39.512412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.150 ms 00:17:20.386 [2024-11-18 23:12:39.512430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.386 [2024-11-18 23:12:39.512550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.386 [2024-11-18 23:12:39.512564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.386 [2024-11-18 23:12:39.512575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:20.386 [2024-11-18 23:12:39.512586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.386 [2024-11-18 23:12:39.520567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.386 [2024-11-18 23:12:39.520616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.386 [2024-11-18 23:12:39.520627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.386 [2024-11-18 23:12:39.520637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.386 [2024-11-18 23:12:39.520731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.386 [2024-11-18 23:12:39.520745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.386 [2024-11-18 23:12:39.520755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.386 [2024-11-18 23:12:39.520772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.386 [2024-11-18 23:12:39.520823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.386 [2024-11-18 23:12:39.520841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.386 [2024-11-18 23:12:39.520849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.520859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.520879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.520891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.387 [2024-11-18 23:12:39.520900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.520911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.534891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.534951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.387 [2024-11-18 23:12:39.534961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.534972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546129] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.387 [2024-11-18 23:12:39.546233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.387 [2024-11-18 23:12:39.546338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.387 [2024-11-18 23:12:39.546410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.387 [2024-11-18 23:12:39.546529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.387 [2024-11-18 23:12:39.546606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.387 [2024-11-18 23:12:39.546689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.387 [2024-11-18 23:12:39.546766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.387 [2024-11-18 23:12:39.546776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.387 [2024-11-18 23:12:39.546791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.387 [2024-11-18 23:12:39.546942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.363 ms, result 0 00:17:20.646 23:12:39 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:20.646 23:12:39 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:20.646 [2024-11-18 23:12:39.991208] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:20.646 [2024-11-18 23:12:39.991398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85802 ] 00:17:20.906 [2024-11-18 23:12:40.147481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.906 [2024-11-18 23:12:40.222064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.167 [2024-11-18 23:12:40.374038] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.167 [2024-11-18 23:12:40.374136] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:21.167 [2024-11-18 23:12:40.538717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.167 [2024-11-18 23:12:40.538787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:21.167 [2024-11-18 23:12:40.538805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:21.167 [2024-11-18 23:12:40.538814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.168 [2024-11-18 23:12:40.541742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.168 [2024-11-18 23:12:40.541799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.168 [2024-11-18 23:12:40.541815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:17:21.168 [2024-11-18 23:12:40.541824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.168 [2024-11-18 23:12:40.541930] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:21.168 [2024-11-18 23:12:40.542240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:21.168 [2024-11-18 23:12:40.542263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.168 [2024-11-18 23:12:40.542273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.168 [2024-11-18 23:12:40.542286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:17:21.168 [2024-11-18 23:12:40.542294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.544715] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:21.430 [2024-11-18 23:12:40.549689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.549752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:21.430 [2024-11-18 23:12:40.549765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.976 ms 00:17:21.430 [2024-11-18 23:12:40.549777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.549881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.549893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:21.430 [2024-11-18 23:12:40.549908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.032 ms 00:17:21.430 [2024-11-18 23:12:40.549920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.561710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.561760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.430 [2024-11-18 23:12:40.561773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.741 ms 00:17:21.430 [2024-11-18 23:12:40.561781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.561936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.561950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.430 [2024-11-18 23:12:40.561960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:21.430 [2024-11-18 23:12:40.561973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.562001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.562019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:21.430 [2024-11-18 23:12:40.562028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.430 [2024-11-18 23:12:40.562040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.562063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:21.430 [2024-11-18 23:12:40.564863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.564918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.430 [2024-11-18 23:12:40.564929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:17:21.430 [2024-11-18 23:12:40.564938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.564987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.565001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:21.430 [2024-11-18 23:12:40.565018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:21.430 [2024-11-18 23:12:40.565027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.565049] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:21.430 [2024-11-18 23:12:40.565086] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:21.430 [2024-11-18 23:12:40.565128] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:21.430 [2024-11-18 23:12:40.565147] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:21.430 [2024-11-18 23:12:40.565288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:21.430 [2024-11-18 23:12:40.565300] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:21.430 [2024-11-18 23:12:40.565312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:21.430 [2024-11-18 23:12:40.565325] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:21.430 [2024-11-18 23:12:40.565336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:21.430 [2024-11-18 23:12:40.565344] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:21.430 [2024-11-18 23:12:40.565353] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:21.430 [2024-11-18 23:12:40.565363] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:21.430 [2024-11-18 23:12:40.565371] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:21.430 [2024-11-18 23:12:40.565380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.565391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:21.430 [2024-11-18 23:12:40.565404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:21.430 [2024-11-18 23:12:40.565413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.565504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.430 [2024-11-18 23:12:40.565535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:21.430 [2024-11-18 23:12:40.565546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:21.430 [2024-11-18 23:12:40.565555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.430 [2024-11-18 23:12:40.565671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:21.430 [2024-11-18 23:12:40.565704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:21.430 [2024-11-18 23:12:40.565714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.430 [2024-11-18 23:12:40.565727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.430 [2024-11-18 23:12:40.565737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:21.430 [2024-11-18 23:12:40.565746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:21.430 [2024-11-18 23:12:40.565755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:21.430 [2024-11-18 23:12:40.565763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:21.431 [2024-11-18 23:12:40.565774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.431 [2024-11-18 23:12:40.565791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:21.431 [2024-11-18 23:12:40.565800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:21.431 [2024-11-18 23:12:40.565809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:21.431 [2024-11-18 23:12:40.565817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:21.431 [2024-11-18 23:12:40.565825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:21.431 [2024-11-18 23:12:40.565833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565840] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:21.431 [2024-11-18 23:12:40.565854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:21.431 [2024-11-18 23:12:40.565862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:21.431 [2024-11-18 23:12:40.565879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.431 [2024-11-18 23:12:40.565894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:21.431 [2024-11-18 23:12:40.565901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.431 [2024-11-18 23:12:40.565921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:21.431 [2024-11-18 23:12:40.565929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.431 [2024-11-18 23:12:40.565944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:21.431 [2024-11-18 23:12:40.565951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:21.431 [2024-11-18 23:12:40.565966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:21.431 [2024-11-18 23:12:40.565973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:21.431 [2024-11-18 23:12:40.565979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.431 [2024-11-18 23:12:40.565986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:21.431 [2024-11-18 23:12:40.565992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:21.431 [2024-11-18 23:12:40.565998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:21.431 [2024-11-18 23:12:40.566005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:21.431 [2024-11-18 23:12:40.566012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:21.431 [2024-11-18 23:12:40.566019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.431 [2024-11-18 23:12:40.566028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:21.431 [2024-11-18 23:12:40.566035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:21.431 [2024-11-18 23:12:40.566042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.431 [2024-11-18 23:12:40.566049] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:21.431 [2024-11-18 23:12:40.566057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:21.431 [2024-11-18 23:12:40.566069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:21.431 [2024-11-18 23:12:40.566076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:21.431 [2024-11-18 23:12:40.566085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:21.431 
[2024-11-18 23:12:40.566092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:21.431 [2024-11-18 23:12:40.566100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:21.431 [2024-11-18 23:12:40.566108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:21.431 [2024-11-18 23:12:40.566115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:21.431 [2024-11-18 23:12:40.566122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:21.431 [2024-11-18 23:12:40.566133] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:21.431 [2024-11-18 23:12:40.566143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:21.431 [2024-11-18 23:12:40.566189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:21.431 [2024-11-18 23:12:40.566198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:21.431 [2024-11-18 23:12:40.566207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:21.431 [2024-11-18 23:12:40.566215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:21.431 [2024-11-18 23:12:40.566222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:21.431 [2024-11-18 23:12:40.566230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:21.431 [2024-11-18 23:12:40.566237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:21.431 [2024-11-18 23:12:40.566244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:21.431 [2024-11-18 23:12:40.566252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:21.431 [2024-11-18 23:12:40.566290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:21.431 [2024-11-18 23:12:40.566300] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:21.431 [2024-11-18 23:12:40.566320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:21.431 [2024-11-18 23:12:40.566327] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:21.431 [2024-11-18 23:12:40.566335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:21.431 [2024-11-18 23:12:40.566342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.431 [2024-11-18 23:12:40.566350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:21.431 [2024-11-18 23:12:40.566363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:17:21.431 [2024-11-18 23:12:40.566370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.431 [2024-11-18 23:12:40.598342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.431 [2024-11-18 23:12:40.598420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.431 [2024-11-18 23:12:40.598441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.895 ms 00:17:21.431 [2024-11-18 23:12:40.598454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.431 [2024-11-18 23:12:40.598679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.431 [2024-11-18 23:12:40.598710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.431 [2024-11-18 23:12:40.598724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:21.432 [2024-11-18 23:12:40.598739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.614871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.614929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.432 [2024-11-18 23:12:40.614941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.097 ms 00:17:21.432 [2024-11-18 23:12:40.614950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.615038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.615050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.432 [2024-11-18 23:12:40.615064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.432 [2024-11-18 23:12:40.615072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.615834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.615880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.432 [2024-11-18 23:12:40.615892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:17:21.432 [2024-11-18 23:12:40.615908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 
23:12:40.616090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.616102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.432 [2024-11-18 23:12:40.616112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:17:21.432 [2024-11-18 23:12:40.616124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.626642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.626701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.432 [2024-11-18 23:12:40.626713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.492 ms 00:17:21.432 [2024-11-18 23:12:40.626721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.631617] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:21.432 [2024-11-18 23:12:40.631683] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:21.432 [2024-11-18 23:12:40.631698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.631707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:21.432 [2024-11-18 23:12:40.631717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:17:21.432 [2024-11-18 23:12:40.631726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.648382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.648440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:21.432 [2024-11-18 23:12:40.648453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.528 ms 00:17:21.432 [2024-11-18 23:12:40.648463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.651834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.651887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:21.432 [2024-11-18 23:12:40.651898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.262 ms 00:17:21.432 [2024-11-18 23:12:40.651906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.654858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.654906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:21.432 [2024-11-18 23:12:40.654927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:17:21.432 [2024-11-18 23:12:40.654935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.655381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.655463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.432 [2024-11-18 23:12:40.655479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:17:21.432 [2024-11-18 23:12:40.655488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.685472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.685540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:21.432 [2024-11-18 23:12:40.685554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.955 ms 00:17:21.432 [2024-11-18 23:12:40.685563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.694317] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.432 [2024-11-18 23:12:40.719811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.719870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.432 [2024-11-18 23:12:40.719885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.145 ms 00:17:21.432 [2024-11-18 23:12:40.719895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.719998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.720011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:21.432 [2024-11-18 23:12:40.720023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:21.432 [2024-11-18 23:12:40.720033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.720110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.720122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.432 [2024-11-18 23:12:40.720138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:21.432 [2024-11-18 23:12:40.720147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.720211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.720227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.432 [2024-11-18 23:12:40.720237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:21.432 [2024-11-18 23:12:40.720246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.720291] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:21.432 [2024-11-18 23:12:40.720307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.720316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:21.432 [2024-11-18 23:12:40.720325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:21.432 [2024-11-18 23:12:40.720334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.727751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.727805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.432 [2024-11-18 23:12:40.727817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.390 ms 00:17:21.432 [2024-11-18 23:12:40.727827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.727941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.432 [2024-11-18 23:12:40.727958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:21.432 [2024-11-18 23:12:40.727970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:21.432 [2024-11-18 23:12:40.727980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.432 [2024-11-18 23:12:40.729796] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:21.432 [2024-11-18 23:12:40.731396] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 190.677 ms, result 0 00:17:21.432 [2024-11-18 23:12:40.732872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.432 [2024-11-18 23:12:40.740121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:22.374  [2024-11-18T23:12:43.140Z] Copying: 20/256 [MB] (20 MBps) [2024-11-18T23:12:44.085Z] Copying: 35/256 [MB] (14 MBps) [2024-11-18T23:12:45.030Z] Copying: 52/256 [MB] (17 MBps) [2024-11-18T23:12:45.986Z] Copying: 73/256 [MB] (20 MBps) [2024-11-18T23:12:46.976Z] Copying: 85284/262144 [kB] (10196 kBps) [2024-11-18T23:12:47.921Z] Copying: 93/256 [MB] (10 MBps) [2024-11-18T23:12:48.868Z] Copying: 104/256 [MB] (10 MBps) [2024-11-18T23:12:49.813Z] Copying: 114/256 [MB] (10 MBps) [2024-11-18T23:12:50.763Z] Copying: 127/256 [MB] (12 MBps) [2024-11-18T23:12:51.802Z] Copying: 137/256 [MB] (10 MBps) [2024-11-18T23:12:52.744Z] Copying: 149/256 [MB] (11 MBps) [2024-11-18T23:12:54.142Z] Copying: 160/256 [MB] (10 MBps) [2024-11-18T23:12:55.087Z] Copying: 170/256 [MB] (10 MBps) [2024-11-18T23:12:56.031Z] Copying: 181/256 [MB] (10 MBps) [2024-11-18T23:12:56.975Z] Copying: 200/256 [MB] (19 MBps) [2024-11-18T23:12:57.920Z] Copying: 211/256 [MB] (10 MBps) [2024-11-18T23:12:58.863Z] Copying: 221/256 [MB] (10 MBps) [2024-11-18T23:12:59.807Z] Copying: 235/256 [MB] (13 MBps) [2024-11-18T23:13:00.070Z] Copying: 254/256 [MB] (18 MBps) [2024-11-18T23:13:00.070Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-18 23:12:59.846999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.692 [2024-11-18 23:12:59.849713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.849898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:40.692 [2024-11-18 23:12:59.850034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:40.692 [2024-11-18 23:12:59.850060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.850107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:40.692 [2024-11-18 23:12:59.851187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.851359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:40.692 [2024-11-18 23:12:59.851538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:17:40.692 [2024-11-18 23:12:59.851563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.851867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.851896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:40.692 [2024-11-18 23:12:59.851917] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:40.692 [2024-11-18 23:12:59.852045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.855825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.855944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:40.692 [2024-11-18 23:12:59.856001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.728 ms 00:17:40.692 [2024-11-18 23:12:59.856025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.863445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.863604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:40.692 [2024-11-18 23:12:59.863664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.381 ms 00:17:40.692 [2024-11-18 23:12:59.863686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.866687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.866855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:40.692 [2024-11-18 23:12:59.866913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:17:40.692 [2024-11-18 23:12:59.866953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.872064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.872266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:40.692 [2024-11-18 23:12:59.872714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:17:40.692 [2024-11-18 23:12:59.872773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.692 [2024-11-18 23:12:59.872953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.692 [2024-11-18 23:12:59.872969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:40.692 [2024-11-18 23:12:59.872980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:40.692 [2024-11-18 23:12:59.872989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.693 [2024-11-18 23:12:59.876465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.693 [2024-11-18 23:12:59.876519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:40.693 [2024-11-18 23:12:59.876530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:17:40.693 [2024-11-18 23:12:59.876539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.693 [2024-11-18 23:12:59.879697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.693 [2024-11-18 23:12:59.879744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:40.693 [2024-11-18 23:12:59.879754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.104 ms 00:17:40.693 [2024-11-18 23:12:59.879761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.693 [2024-11-18 23:12:59.882278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.693 [2024-11-18 23:12:59.882339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 
00:17:40.693 [2024-11-18 23:12:59.882350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:17:40.693 [2024-11-18 23:12:59.882358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.693 [2024-11-18 23:12:59.884806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.693 [2024-11-18 23:12:59.884857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:40.693 [2024-11-18 23:12:59.884867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:17:40.693 [2024-11-18 23:12:59.884875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.693 [2024-11-18 23:12:59.884923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:40.693 [2024-11-18 23:12:59.884949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.884960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.884968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.884976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.884984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.884992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885099] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 
23:12:59.885309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:40.693 [2024-11-18 23:12:59.885444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:17:40.694 [2024-11-18 23:12:59.885514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:40.694 [2024-11-18 23:12:59.885764] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:40.694 [2024-11-18 23:12:59.885773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:17:40.694 [2024-11-18 23:12:59.885791] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:40.694 [2024-11-18 23:12:59.885799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:40.694 [2024-11-18 23:12:59.885807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:40.694 [2024-11-18 23:12:59.885815] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:40.694 [2024-11-18 23:12:59.885823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:40.694 [2024-11-18 23:12:59.885831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:40.694 [2024-11-18 23:12:59.885839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:40.694 [2024-11-18 23:12:59.885845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:40.694 [2024-11-18 23:12:59.885852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:40.694 [2024-11-18 23:12:59.885860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.694 [2024-11-18 23:12:59.885868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:40.694 [2024-11-18 23:12:59.885880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:17:40.694 [2024-11-18 23:12:59.885888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.889141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.694 [2024-11-18 23:12:59.889194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:40.694 [2024-11-18 23:12:59.889206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.233 ms 00:17:40.694 [2024-11-18 23:12:59.889215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.889375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.694 [2024-11-18 23:12:59.889392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:40.694 [2024-11-18 23:12:59.889406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:40.694 [2024-11-18 23:12:59.889419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.899879] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.899929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.694 [2024-11-18 23:12:59.899941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.899959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.900059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.900075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.694 [2024-11-18 23:12:59.900084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.900093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.900145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.900177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.694 [2024-11-18 23:12:59.900186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.900196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.900217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.900227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.694 [2024-11-18 23:12:59.900240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.900248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.919031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.919084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.694 [2024-11-18 23:12:59.919095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.919109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.934013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.694 [2024-11-18 23:12:59.934075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.694 [2024-11-18 23:12:59.934087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.694 [2024-11-18 23:12:59.934097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.694 [2024-11-18 23:12:59.934201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.695 [2024-11-18 23:12:59.934231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.695 [2024-11-18 23:12:59.934297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934313] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.695 [2024-11-18 23:12:59.934421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:40.695 [2024-11-18 23:12:59.934488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.695 [2024-11-18 23:12:59.934577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:40.695 [2024-11-18 23:12:59.934661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.695 [2024-11-18 23:12:59.934671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:40.695 [2024-11-18 23:12:59.934683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.695 [2024-11-18 23:12:59.934866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.136 ms, result 0 00:17:40.956 00:17:40.956 00:17:40.956 23:13:00 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:40.956 23:13:00 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:41.529 23:13:00 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:41.529 [2024-11-18 23:13:00.904632] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:41.530 [2024-11-18 23:13:00.904780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86029 ] 00:17:41.791 [2024-11-18 23:13:01.059132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.791 [2024-11-18 23:13:01.110245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.055 [2024-11-18 23:13:01.219892] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.055 [2024-11-18 23:13:01.219972] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:42.055 [2024-11-18 23:13:01.383645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.383706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:42.055 [2024-11-18 23:13:01.383722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.055 [2024-11-18 23:13:01.383730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.386352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.386400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.055 [2024-11-18 23:13:01.386413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:17:42.055 [2024-11-18 23:13:01.386421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.386524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:42.055 [2024-11-18 23:13:01.386782] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:42.055 [2024-11-18 23:13:01.386802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.386811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.055 [2024-11-18 23:13:01.386822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:17:42.055 [2024-11-18 23:13:01.386831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.388561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:42.055 [2024-11-18 23:13:01.392253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.392305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:42.055 [2024-11-18 23:13:01.392320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.694 ms 00:17:42.055 [2024-11-18 23:13:01.392331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.392408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.392418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:42.055 [2024-11-18 23:13:01.392431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:42.055 [2024-11-18 23:13:01.392438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.400285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:42.055 [2024-11-18 23:13:01.400329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.055 [2024-11-18 23:13:01.400339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.803 ms 00:17:42.055 [2024-11-18 23:13:01.400347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.400485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.400497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.055 [2024-11-18 23:13:01.400510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:42.055 [2024-11-18 23:13:01.400518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.400546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.400559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:42.055 [2024-11-18 23:13:01.400568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:42.055 [2024-11-18 23:13:01.400576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.400600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:42.055 [2024-11-18 23:13:01.402669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.402706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.055 [2024-11-18 23:13:01.402723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:17:42.055 [2024-11-18 23:13:01.402732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.402774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.055 [2024-11-18 23:13:01.402787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:42.055 [2024-11-18 23:13:01.402798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:42.055 [2024-11-18 23:13:01.402806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.055 [2024-11-18 23:13:01.402824] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:42.055 [2024-11-18 23:13:01.402845] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:42.055 [2024-11-18 23:13:01.402881] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:42.055 [2024-11-18 23:13:01.402902] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:42.055 [2024-11-18 23:13:01.403010] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:42.055 [2024-11-18 23:13:01.403023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:42.055 [2024-11-18 23:13:01.403035] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:42.055 [2024-11-18 23:13:01.403049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:42.055 [2024-11-18 23:13:01.403059] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:42.055 [2024-11-18 23:13:01.403067] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:42.056 [2024-11-18 23:13:01.403075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:42.056 [2024-11-18 23:13:01.403082] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:42.056 [2024-11-18 23:13:01.403090] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:42.056 [2024-11-18 23:13:01.403099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.056 [2024-11-18 23:13:01.403111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:42.056 [2024-11-18 23:13:01.403122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:42.056 [2024-11-18 23:13:01.403129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.056 [2024-11-18 23:13:01.403234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.056 [2024-11-18 23:13:01.403253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:42.056 [2024-11-18 23:13:01.403266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:42.056 [2024-11-18 23:13:01.403274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.056 [2024-11-18 23:13:01.403403] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:42.056 [2024-11-18 23:13:01.403433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:42.056 [2024-11-18 23:13:01.403445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:42.056 [2024-11-18 23:13:01.403474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:42.056 [2024-11-18 23:13:01.403506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.056 [2024-11-18 23:13:01.403523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:42.056 [2024-11-18 23:13:01.403531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:42.056 [2024-11-18 23:13:01.403539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:42.056 [2024-11-18 23:13:01.403551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:42.056 [2024-11-18 23:13:01.403561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:42.056 [2024-11-18 23:13:01.403569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:42.056 [2024-11-18 23:13:01.403584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403592] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:42.056 [2024-11-18 23:13:01.403607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:42.056 [2024-11-18 23:13:01.403632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:42.056 [2024-11-18 23:13:01.403662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:42.056 [2024-11-18 23:13:01.403687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:42.056 [2024-11-18 23:13:01.403707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.056 [2024-11-18 23:13:01.403722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:42.056 [2024-11-18 23:13:01.403730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:42.056 [2024-11-18 23:13:01.403737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:42.056 [2024-11-18 23:13:01.403744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:42.056 [2024-11-18 23:13:01.403751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:42.056 [2024-11-18 23:13:01.403757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:42.056 [2024-11-18 23:13:01.403774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:42.056 [2024-11-18 23:13:01.403782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403788] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:42.056 [2024-11-18 23:13:01.403795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:42.056 [2024-11-18 23:13:01.403803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:42.056 [2024-11-18 23:13:01.403819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:42.056 [2024-11-18 23:13:01.403826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:42.056 [2024-11-18 23:13:01.403833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:42.056 
[2024-11-18 23:13:01.403841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:42.056 [2024-11-18 23:13:01.403848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:42.056 [2024-11-18 23:13:01.403855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:42.056 [2024-11-18 23:13:01.403864] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:42.056 [2024-11-18 23:13:01.403878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.056 [2024-11-18 23:13:01.403886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:42.056 [2024-11-18 23:13:01.403898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:42.056 [2024-11-18 23:13:01.403906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:42.056 [2024-11-18 23:13:01.403913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:42.056 [2024-11-18 23:13:01.403920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:42.056 [2024-11-18 23:13:01.403928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:42.056 [2024-11-18 23:13:01.403936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:42.056 [2024-11-18 23:13:01.403946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:42.056 [2024-11-18 23:13:01.403954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:42.056 [2024-11-18 23:13:01.403962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.403969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.403976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.403983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.403993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:42.057 [2024-11-18 23:13:01.404001] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:42.057 [2024-11-18 23:13:01.404011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.404020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:42.057 [2024-11-18 23:13:01.404030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:42.057 [2024-11-18 23:13:01.404037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:42.057 [2024-11-18 23:13:01.404045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:42.057 [2024-11-18 23:13:01.404054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.057 [2024-11-18 23:13:01.404063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:42.057 [2024-11-18 23:13:01.404074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:17:42.057 [2024-11-18 23:13:01.404081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.057 [2024-11-18 23:13:01.427355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.057 [2024-11-18 23:13:01.427411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.057 [2024-11-18 23:13:01.427426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.217 ms 00:17:42.057 [2024-11-18 23:13:01.427437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.057 [2024-11-18 23:13:01.427587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.057 [2024-11-18 23:13:01.427608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.057 [2024-11-18 23:13:01.427618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:42.057 [2024-11-18 23:13:01.427630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.440082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.440133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.320 [2024-11-18 23:13:01.440145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.427 ms 00:17:42.320 [2024-11-18 23:13:01.440181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.440253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.440263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.320 [2024-11-18 23:13:01.440275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.320 [2024-11-18 23:13:01.440284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.440798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.440840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.320 [2024-11-18 23:13:01.440851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:17:42.320 [2024-11-18 23:13:01.440867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.441023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.441042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.320 [2024-11-18 23:13:01.441052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:17:42.320 [2024-11-18 23:13:01.441064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.448389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.448439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.320 [2024-11-18 23:13:01.448454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.299 ms 00:17:42.320 [2024-11-18 23:13:01.448461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.452255] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:42.320 [2024-11-18 23:13:01.452307] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:42.320 [2024-11-18 23:13:01.452319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.452329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:42.320 [2024-11-18 23:13:01.452338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.760 ms 00:17:42.320 [2024-11-18 23:13:01.452345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.468616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.468662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:42.320 [2024-11-18 23:13:01.468674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.208 ms 00:17:42.320 [2024-11-18 23:13:01.468683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.471640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.471685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:42.320 [2024-11-18 23:13:01.471695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.881 ms 00:17:42.320 [2024-11-18 23:13:01.471703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.474720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.474764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:42.320 [2024-11-18 23:13:01.474783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.964 ms 00:17:42.320 [2024-11-18 23:13:01.474791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.475124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.475139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.320 [2024-11-18 23:13:01.475150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:42.320 [2024-11-18 23:13:01.475179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.320 [2024-11-18 23:13:01.502217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.320 [2024-11-18 23:13:01.502266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:42.321 [2024-11-18 23:13:01.502279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.985 ms 00:17:42.321 [2024-11-18 23:13:01.502288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.510439] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.321 [2024-11-18 23:13:01.529331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.529382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.321 [2024-11-18 23:13:01.529395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.960 ms 00:17:42.321 [2024-11-18 23:13:01.529404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.529538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.529554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:42.321 [2024-11-18 23:13:01.529564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:42.321 [2024-11-18 23:13:01.529573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.529635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.529645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.321 [2024-11-18 23:13:01.529654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:42.321 [2024-11-18 23:13:01.529663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.529685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.529695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.321 [2024-11-18 23:13:01.529703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:42.321 [2024-11-18 23:13:01.529711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.529748] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:42.321 [2024-11-18 23:13:01.529763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.529771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:42.321 [2024-11-18 23:13:01.529780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:42.321 [2024-11-18 23:13:01.529792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.535656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.535714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.321 [2024-11-18 23:13:01.535729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:17:42.321 [2024-11-18 23:13:01.535738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 [2024-11-18 23:13:01.535830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.321 [2024-11-18 23:13:01.535844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.321 [2024-11-18 23:13:01.535853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:42.321 [2024-11-18 23:13:01.535862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.321 
[2024-11-18 23:13:01.536892] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.321 [2024-11-18 23:13:01.538226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 152.922 ms, result 0 00:17:42.321 [2024-11-18 23:13:01.540026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.321 [2024-11-18 23:13:01.546960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:42.584  [2024-11-18T23:13:01.962Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-18 23:13:01.911230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.584 [2024-11-18 23:13:01.912302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.912349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.584 [2024-11-18 23:13:01.912371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.584 [2024-11-18 23:13:01.912381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.912403] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:42.584 [2024-11-18 23:13:01.913052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.913087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.584 [2024-11-18 23:13:01.913098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:17:42.584 [2024-11-18 23:13:01.913106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.916436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.916483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.584 [2024-11-18 23:13:01.916502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:17:42.584 [2024-11-18 23:13:01.916510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.921087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.921138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.584 [2024-11-18 23:13:01.921150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.555 ms 00:17:42.584 [2024-11-18 23:13:01.921176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.928088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.928129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.584 [2024-11-18 23:13:01.928141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.875 ms 00:17:42.584 [2024-11-18 23:13:01.928150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.931039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.931087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.584 [2024-11-18 23:13:01.931097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.822 ms 00:17:42.584 [2024-11-18 23:13:01.931115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.936770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.936818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.584 [2024-11-18 23:13:01.936837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.596 ms 00:17:42.584 [2024-11-18 23:13:01.936846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.936959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.936971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.584 [2024-11-18 23:13:01.936981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:42.584 [2024-11-18 23:13:01.936990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.940110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.940168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.584 [2024-11-18 23:13:01.940178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:17:42.584 [2024-11-18 23:13:01.940186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.942966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.943011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.584 [2024-11-18 23:13:01.943020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.737 ms 00:17:42.584 [2024-11-18 23:13:01.943028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.945223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.945266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.584 [2024-11-18 23:13:01.945275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:17:42.584 [2024-11-18 23:13:01.945282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.947629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.584 [2024-11-18 23:13:01.947675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.584 [2024-11-18 23:13:01.947684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:17:42.584 [2024-11-18 23:13:01.947692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.584 [2024-11-18 23:13:01.947732] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.584 [2024-11-18 23:13:01.947754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 
23:13:01.947789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:17:42.584 [2024-11-18 23:13:01.947984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.947998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.584 [2024-11-18 23:13:01.948146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.585 [2024-11-18 23:13:01.948582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.585 [2024-11-18 23:13:01.948597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:17:42.585 [2024-11-18 23:13:01.948612] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.585 [2024-11-18 23:13:01.948621] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.585 
[2024-11-18 23:13:01.948628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.585 [2024-11-18 23:13:01.948636] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.585 [2024-11-18 23:13:01.948643] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.585 [2024-11-18 23:13:01.948653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.585 [2024-11-18 23:13:01.948660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.585 [2024-11-18 23:13:01.948666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.585 [2024-11-18 23:13:01.948673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.585 [2024-11-18 23:13:01.948680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.585 [2024-11-18 23:13:01.948688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.585 [2024-11-18 23:13:01.948702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:17:42.585 [2024-11-18 23:13:01.948710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.585 [2024-11-18 23:13:01.950598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.585 [2024-11-18 23:13:01.950634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.585 [2024-11-18 23:13:01.950644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:17:42.585 [2024-11-18 23:13:01.950651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.585 [2024-11-18 23:13:01.950789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.585 [2024-11-18 23:13:01.950799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.585 [2024-11-18 23:13:01.950813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:42.585 [2024-11-18 23:13:01.950820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.958099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.958146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.846 [2024-11-18 23:13:01.958192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.958202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.958265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.958281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.846 [2024-11-18 23:13:01.958294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.958302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.958346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.958356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.846 [2024-11-18 23:13:01.958371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.958379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.958397] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.958405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.846 [2024-11-18 23:13:01.958417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.958424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.971949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.972007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.846 [2024-11-18 23:13:01.972018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.972027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.983046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.983114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.846 [2024-11-18 23:13:01.983126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.846 [2024-11-18 23:13:01.983134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.846 [2024-11-18 23:13:01.983243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.846 [2024-11-18 23:13:01.983254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.847 [2024-11-18 23:13:01.983266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.847 [2024-11-18 23:13:01.983308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.847 [2024-11-18 23:13:01.983334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.847 [2024-11-18 23:13:01.983343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.847 [2024-11-18 23:13:01.983438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.847 [2024-11-18 23:13:01.983449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.847 [2024-11-18 23:13:01.983458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.847 [2024-11-18 23:13:01.983498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.847 [2024-11-18 23:13:01.983507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.847 [2024-11-18 23:13:01.983516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.847 [2024-11-18 23:13:01.983585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.847 [2024-11-18 23:13:01.983595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.847 [2024-11-18 23:13:01.983604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
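Editor's note: every FTL management step in this shutdown (and in the startup traced further down) is logged as a quadruplet of trace_step records: Action or Rollback, the step name, its duration, and a status where 0 means success; the rollback records resume immediately below. A rough awk sketch for pairing step names with durations from a capture like this one — it assumes the raw one-record-per-line console layout (the records in this excerpt are wrapped together) and a hypothetical file name build.log, so it is illustrative rather than directly runnable against the text above:

    # Pair each trace_step "name:" record with the "duration:" record that
    # follows it, printing one "duration<TAB>step name" summary line per step.
    awk -F'name: |duration: ' '
        /trace_step/ && /name:/     { name = $2 }
        /trace_step/ && /duration:/ { printf "%s\t%s\n", $2, name }
    ' build.log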
00:17:42.847 [2024-11-18 23:13:01.983665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.847 [2024-11-18 23:13:01.983678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.847 [2024-11-18 23:13:01.983688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.847 [2024-11-18 23:13:01.983703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.847 [2024-11-18 23:13:01.983859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.527 ms, result 0 00:17:43.108 00:17:43.108 00:17:43.108 23:13:02 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:43.108 23:13:02 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86046 00:17:43.108 23:13:02 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86046 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86046 ']' 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.108 23:13:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:43.108 [2024-11-18 23:13:02.408486] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:43.109 [2024-11-18 23:13:02.408700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86046 ] 00:17:43.370 [2024-11-18 23:13:02.558913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.370 [2024-11-18 23:13:02.633328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.943 23:13:03 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.943 23:13:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:43.943 23:13:03 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:44.203 [2024-11-18 23:13:03.473060] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.203 [2024-11-18 23:13:03.473477] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:44.467 [2024-11-18 23:13:03.652953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.653266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.467 [2024-11-18 23:13:03.653296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:44.467 [2024-11-18 23:13:03.653309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.656062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.656130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.467 [2024-11-18 23:13:03.656142] mngt/ftl_mngt.c: 
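Editor's note: at this point trim.sh@92 has relaunched spdk_tgt and the traced waitforlisten call blocks until the new process (pid 86046) answers on /var/tmp/spdk.sock. The real helper lives in autotest_common.sh and is not reproduced in this log; a minimal sketch of the same polling idea, assuming scripts/rpc.py and the rpc_get_methods RPC as the liveness probe:

    # Minimal re-creation of the waitforlisten pattern: poll the RPC socket
    # until the target answers, bailing out early if the process has died.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited prematurely
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                      # timed out waiting
    }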
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:17:44.467 [2024-11-18 23:13:03.656171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.656303] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.467 [2024-11-18 23:13:03.656611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.467 [2024-11-18 23:13:03.656631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.656642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.467 [2024-11-18 23:13:03.656654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:44.467 [2024-11-18 23:13:03.656667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.659084] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:44.467 [2024-11-18 23:13:03.664120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.664208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:44.467 [2024-11-18 23:13:03.664229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.032 ms 00:17:44.467 [2024-11-18 23:13:03.664237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.664328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.664340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:44.467 [2024-11-18 23:13:03.664382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:44.467 [2024-11-18 23:13:03.664392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.676223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.676267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.467 [2024-11-18 23:13:03.676283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.771 ms 00:17:44.467 [2024-11-18 23:13:03.676293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.676428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.676440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.467 [2024-11-18 23:13:03.676452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:44.467 [2024-11-18 23:13:03.676461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.676494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.676503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:44.467 [2024-11-18 23:13:03.676519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:44.467 [2024-11-18 23:13:03.676527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.676554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:44.467 [2024-11-18 23:13:03.679346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:44.467 [2024-11-18 23:13:03.679547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.467 [2024-11-18 23:13:03.679566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:17:44.467 [2024-11-18 23:13:03.679582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.679636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.679648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.467 [2024-11-18 23:13:03.679661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:44.467 [2024-11-18 23:13:03.679672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.679694] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:44.467 [2024-11-18 23:13:03.679723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:44.467 [2024-11-18 23:13:03.679775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:44.467 [2024-11-18 23:13:03.679802] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:44.467 [2024-11-18 23:13:03.679923] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:44.467 [2024-11-18 23:13:03.679938] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.467 [2024-11-18 23:13:03.679950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:44.467 [2024-11-18 23:13:03.679965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.467 [2024-11-18 23:13:03.679974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.467 [2024-11-18 23:13:03.679989] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:44.467 [2024-11-18 23:13:03.679997] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.467 [2024-11-18 23:13:03.680007] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:44.467 [2024-11-18 23:13:03.680019] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:44.467 [2024-11-18 23:13:03.680033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.680040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.467 [2024-11-18 23:13:03.680050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:17:44.467 [2024-11-18 23:13:03.680058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.680153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.467 [2024-11-18 23:13:03.680188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.467 [2024-11-18 23:13:03.680204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:44.467 [2024-11-18 23:13:03.680212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.467 [2024-11-18 23:13:03.680322] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.467 [2024-11-18 23:13:03.680343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.467 [2024-11-18 23:13:03.680358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.467 [2024-11-18 23:13:03.680367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.467 [2024-11-18 23:13:03.680381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.467 [2024-11-18 23:13:03.680388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.467 [2024-11-18 23:13:03.680399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:44.467 [2024-11-18 23:13:03.680408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.468 [2024-11-18 23:13:03.680427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.468 [2024-11-18 23:13:03.680444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.468 [2024-11-18 23:13:03.680452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:44.468 [2024-11-18 23:13:03.680462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.468 [2024-11-18 23:13:03.680470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.468 [2024-11-18 23:13:03.680479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:44.468 [2024-11-18 23:13:03.680486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.468 [2024-11-18 23:13:03.680501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.468 [2024-11-18 23:13:03.680531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.468 [2024-11-18 23:13:03.680561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.468 [2024-11-18 23:13:03.680588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:44.468 [2024-11-18 23:13:03.680610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.468 [2024-11-18 
23:13:03.680633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.468 [2024-11-18 23:13:03.680649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.468 [2024-11-18 23:13:03.680656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:44.468 [2024-11-18 23:13:03.680668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.468 [2024-11-18 23:13:03.680674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:44.468 [2024-11-18 23:13:03.680683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:44.468 [2024-11-18 23:13:03.680690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:44.468 [2024-11-18 23:13:03.680704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:44.468 [2024-11-18 23:13:03.680712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680719] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.468 [2024-11-18 23:13:03.680728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.468 [2024-11-18 23:13:03.680736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.468 [2024-11-18 23:13:03.680754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.468 [2024-11-18 23:13:03.680764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.468 [2024-11-18 23:13:03.680770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.468 [2024-11-18 23:13:03.680781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.468 [2024-11-18 23:13:03.680788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.468 [2024-11-18 23:13:03.680801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.468 [2024-11-18 23:13:03.680811] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.468 [2024-11-18 23:13:03.680823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:44.468 [2024-11-18 23:13:03.680844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:44.468 [2024-11-18 23:13:03.680852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:44.468 [2024-11-18 23:13:03.680861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:44.468 [2024-11-18 23:13:03.680869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:44.468 
[2024-11-18 23:13:03.680878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:44.468 [2024-11-18 23:13:03.680885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:44.468 [2024-11-18 23:13:03.680894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:44.468 [2024-11-18 23:13:03.680901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:44.468 [2024-11-18 23:13:03.680910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:44.468 [2024-11-18 23:13:03.680955] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.468 [2024-11-18 23:13:03.680965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.468 [2024-11-18 23:13:03.680986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.468 [2024-11-18 23:13:03.680993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.468 [2024-11-18 23:13:03.681005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.468 [2024-11-18 23:13:03.681013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.681023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.468 [2024-11-18 23:13:03.681031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:17:44.468 [2024-11-18 23:13:03.681040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.701910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.701971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.468 [2024-11-18 23:13:03.701985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.779 ms 00:17:44.468 [2024-11-18 23:13:03.701996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.702137] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.702190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.468 [2024-11-18 23:13:03.702200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:44.468 [2024-11-18 23:13:03.702211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.719249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.719306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.468 [2024-11-18 23:13:03.719330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.013 ms 00:17:44.468 [2024-11-18 23:13:03.719343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.719422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.719436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.468 [2024-11-18 23:13:03.719446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:44.468 [2024-11-18 23:13:03.719458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.720178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.720221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.468 [2024-11-18 23:13:03.720238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:17:44.468 [2024-11-18 23:13:03.720249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.720429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.720448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.468 [2024-11-18 23:13:03.720457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:44.468 [2024-11-18 23:13:03.720469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.742459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.468 [2024-11-18 23:13:03.742536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.468 [2024-11-18 23:13:03.742554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.961 ms 00:17:44.468 [2024-11-18 23:13:03.742568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.468 [2024-11-18 23:13:03.747734] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:44.469 [2024-11-18 23:13:03.747800] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:44.469 [2024-11-18 23:13:03.747816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.747828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:44.469 [2024-11-18 23:13:03.747838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:17:44.469 [2024-11-18 23:13:03.747848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.764082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 
23:13:03.764144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:44.469 [2024-11-18 23:13:03.764173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.146 ms 00:17:44.469 [2024-11-18 23:13:03.764189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.767481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.767540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:44.469 [2024-11-18 23:13:03.767551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:17:44.469 [2024-11-18 23:13:03.767561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.770327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.770385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:44.469 [2024-11-18 23:13:03.770396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.706 ms 00:17:44.469 [2024-11-18 23:13:03.770406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.770782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.770798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:44.469 [2024-11-18 23:13:03.770808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:17:44.469 [2024-11-18 23:13:03.770818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.800612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.800683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:44.469 [2024-11-18 23:13:03.800697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.768 ms 00:17:44.469 [2024-11-18 23:13:03.800713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.809260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.469 [2024-11-18 23:13:03.833726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.833785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.469 [2024-11-18 23:13:03.833802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.908 ms 00:17:44.469 [2024-11-18 23:13:03.833819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.833923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.833936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:44.469 [2024-11-18 23:13:03.833952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:44.469 [2024-11-18 23:13:03.833962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.834032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.834045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:44.469 [2024-11-18 23:13:03.834061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:44.469 [2024-11-18 
23:13:03.834069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.834098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.834107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:44.469 [2024-11-18 23:13:03.834124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.469 [2024-11-18 23:13:03.834135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.469 [2024-11-18 23:13:03.834216] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:44.469 [2024-11-18 23:13:03.834232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.469 [2024-11-18 23:13:03.834243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:44.469 [2024-11-18 23:13:03.834251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:44.469 [2024-11-18 23:13:03.834262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.731 [2024-11-18 23:13:03.841676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.731 [2024-11-18 23:13:03.841742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:44.731 [2024-11-18 23:13:03.841755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.389 ms 00:17:44.731 [2024-11-18 23:13:03.841766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.731 [2024-11-18 23:13:03.841873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.731 [2024-11-18 23:13:03.841887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:44.731 [2024-11-18 23:13:03.841896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:44.731 [2024-11-18 23:13:03.841908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.731 [2024-11-18 23:13:03.843291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.731 [2024-11-18 23:13:03.844799] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 189.935 ms, result 0 00:17:44.731 [2024-11-18 23:13:03.847450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.731 Some configs were skipped because the RPC state that can call them passed over. 
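Editor's note: the 'FTL startup' above was driven entirely by rpc.py load_config (trim.sh@96), which replays a previously saved JSON configuration into the fresh target; the closing notice about skipped configs is normal rpc.py output when some entries only apply in other RPC states. The save half of the round trip is not shown in this excerpt, but a hedged sketch of it, using the real save_config/load_config/bdev_get_bdevs subcommands and the ftl.json path that spdk_dd consumes later (paths relative to the spdk repo):

    # Capture the running configuration (including the ftl0 bdev) as JSON,
    # then replay it into a restarted target; load_config reads from stdin.
    scripts/rpc.py save_config > test/ftl/config/ftl.json
    scripts/rpc.py load_config < test/ftl/config/ftl.json
    scripts/rpc.py bdev_get_bdevs -b ftl0    # confirm the FTL bdev is back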
00:17:44.731 23:13:03 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:44.731 [2024-11-18 23:13:04.081065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.731 [2024-11-18 23:13:04.081308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:44.731 [2024-11-18 23:13:04.081406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.308 ms 00:17:44.731 [2024-11-18 23:13:04.081442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.731 [2024-11-18 23:13:04.081506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.759 ms, result 0 00:17:44.731 true 00:17:44.992 23:13:04 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:44.992 [2024-11-18 23:13:04.311835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.992 [2024-11-18 23:13:04.312086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:44.992 [2024-11-18 23:13:04.312178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:17:44.992 [2024-11-18 23:13:04.312211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.992 [2024-11-18 23:13:04.312280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.417 ms, result 0 00:17:44.992 true 00:17:44.992 23:13:04 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86046 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86046 ']' 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86046 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86046 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86046' 00:17:44.992 killing process with pid 86046 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86046 00:17:44.992 23:13:04 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86046 00:17:45.256 [2024-11-18 23:13:04.533661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.533726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.256 [2024-11-18 23:13:04.533743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:45.256 [2024-11-18 23:13:04.533752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.533780] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.256 [2024-11-18 23:13:04.534332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.534362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.256 [2024-11-18 23:13:04.534373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.539 ms 00:17:45.256 [2024-11-18 23:13:04.534385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.534699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.534714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.256 [2024-11-18 23:13:04.534724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:17:45.256 [2024-11-18 23:13:04.534734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.539511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.539732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.256 [2024-11-18 23:13:04.539750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:17:45.256 [2024-11-18 23:13:04.539760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.546800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.546950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.256 [2024-11-18 23:13:04.546967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.991 ms 00:17:45.256 [2024-11-18 23:13:04.546980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.549742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.549792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.256 [2024-11-18 23:13:04.549802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.678 ms 00:17:45.256 [2024-11-18 23:13:04.549811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.555678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.555828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.256 [2024-11-18 23:13:04.555885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.821 ms 00:17:45.256 [2024-11-18 23:13:04.555912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.556150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.556253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.256 [2024-11-18 23:13:04.556281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:45.256 [2024-11-18 23:13:04.556357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.559625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.559784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.256 [2024-11-18 23:13:04.559800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:17:45.256 [2024-11-18 23:13:04.559814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.562417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.562479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.256 [2024-11-18 
23:13:04.562489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:17:45.256 [2024-11-18 23:13:04.562499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.564122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.564188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.256 [2024-11-18 23:13:04.564198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:17:45.256 [2024-11-18 23:13:04.564207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.565815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.256 [2024-11-18 23:13:04.565865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.256 [2024-11-18 23:13:04.565875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:17:45.256 [2024-11-18 23:13:04.565885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.256 [2024-11-18 23:13:04.565926] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.256 [2024-11-18 23:13:04.565944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.565956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.565969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.565977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.565989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.565999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566102] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.256 [2024-11-18 23:13:04.566128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 
23:13:04.566372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:17:45.257 [2024-11-18 23:13:04.566600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.257 [2024-11-18 23:13:04.566818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.258 [2024-11-18 23:13:04.566927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.258 [2024-11-18 23:13:04.566935] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00 00:17:45.258 [2024-11-18 23:13:04.566947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.258 [2024-11-18 23:13:04.566960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.258 [2024-11-18 23:13:04.566972] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.258 [2024-11-18 23:13:04.566980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.258 [2024-11-18 23:13:04.566999] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.258 [2024-11-18 23:13:04.567012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.258 [2024-11-18 23:13:04.567021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.258 [2024-11-18 23:13:04.567030] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.258 [2024-11-18 23:13:04.567040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.258 [2024-11-18 23:13:04.567048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.258 [2024-11-18 23:13:04.567059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.258 [2024-11-18 23:13:04.567068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:17:45.258 [2024-11-18 23:13:04.567083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.569100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.258 [2024-11-18 23:13:04.569133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.258 [2024-11-18 23:13:04.569145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.980 ms 00:17:45.258 [2024-11-18 23:13:04.569171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.569300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:45.258 [2024-11-18 23:13:04.569314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.258 [2024-11-18 23:13:04.569324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:45.258 [2024-11-18 23:13:04.569334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.576451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.576497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.258 [2024-11-18 23:13:04.576507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.576521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.576589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.576600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.258 [2024-11-18 23:13:04.576609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.576622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.576666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.576681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.258 [2024-11-18 23:13:04.576694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.576705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.576724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.576734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.258 [2024-11-18 23:13:04.576743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.576752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.588927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.588978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.258 [2024-11-18 23:13:04.588993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.589003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.597898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.597951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.258 [2024-11-18 23:13:04.597962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.597975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.258 [2024-11-18 23:13:04.598043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:45.258 [2024-11-18 23:13:04.598087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.258 [2024-11-18 23:13:04.598107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.258 [2024-11-18 23:13:04.598249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.258 [2024-11-18 23:13:04.598334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.258 [2024-11-18 23:13:04.598414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.258 [2024-11-18 23:13:04.598487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.258 [2024-11-18 23:13:04.598495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.258 [2024-11-18 23:13:04.598505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.258 [2024-11-18 23:13:04.598656] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.971 ms, result 0 00:17:45.829 23:13:04 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:45.829 [2024-11-18 23:13:05.032655] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:45.829 [2024-11-18 23:13:05.032834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86087 ] 00:17:45.829 [2024-11-18 23:13:05.185783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.090 [2024-11-18 23:13:05.244002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.091 [2024-11-18 23:13:05.384027] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.091 [2024-11-18 23:13:05.384128] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.353 [2024-11-18 23:13:05.546663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.353 [2024-11-18 23:13:05.546923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.353 [2024-11-18 23:13:05.546951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.353 [2024-11-18 23:13:05.546961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.353 [2024-11-18 23:13:05.549818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.353 [2024-11-18 23:13:05.549874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.353 [2024-11-18 23:13:05.549889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:17:46.354 [2024-11-18 23:13:05.549898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.550004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.354 [2024-11-18 23:13:05.550461] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.354 [2024-11-18 23:13:05.550532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.550561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.354 [2024-11-18 23:13:05.550586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:17:46.354 [2024-11-18 23:13:05.550655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.552947] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.354 [2024-11-18 23:13:05.557588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.557664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.354 [2024-11-18 23:13:05.557676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:17:46.354 [2024-11-18 23:13:05.557697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.557807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.557819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.354 [2024-11-18 23:13:05.557834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:46.354 [2024-11-18 23:13:05.557842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.568748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:46.354 [2024-11-18 23:13:05.568952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.354 [2024-11-18 23:13:05.568972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.855 ms 00:17:46.354 [2024-11-18 23:13:05.568982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.569140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.569176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.354 [2024-11-18 23:13:05.569188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:46.354 [2024-11-18 23:13:05.569197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.569239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.569256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.354 [2024-11-18 23:13:05.569265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:46.354 [2024-11-18 23:13:05.569273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.569296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.354 [2024-11-18 23:13:05.572870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.572928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.354 [2024-11-18 23:13:05.572942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.577 ms 00:17:46.354 [2024-11-18 23:13:05.572950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.573016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.573035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.354 [2024-11-18 23:13:05.573047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:46.354 [2024-11-18 23:13:05.573062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.573105] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.354 [2024-11-18 23:13:05.573131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.354 [2024-11-18 23:13:05.573344] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.354 [2024-11-18 23:13:05.573413] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.354 [2024-11-18 23:13:05.573555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.354 [2024-11-18 23:13:05.573590] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.354 [2024-11-18 23:13:05.573732] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.354 [2024-11-18 23:13:05.573746] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.354 [2024-11-18 23:13:05.573757] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.354 [2024-11-18 23:13:05.573766] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.354 [2024-11-18 23:13:05.573775] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.354 [2024-11-18 23:13:05.573783] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.354 [2024-11-18 23:13:05.573791] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.354 [2024-11-18 23:13:05.573801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.573814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.354 [2024-11-18 23:13:05.573826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:17:46.354 [2024-11-18 23:13:05.573834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.573938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.354 [2024-11-18 23:13:05.573949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.354 [2024-11-18 23:13:05.573957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:46.354 [2024-11-18 23:13:05.573971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.354 [2024-11-18 23:13:05.574081] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.354 [2024-11-18 23:13:05.574102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.354 [2024-11-18 23:13:05.574112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.354 [2024-11-18 23:13:05.574140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.354 [2024-11-18 23:13:05.574184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.354 [2024-11-18 23:13:05.574198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.354 [2024-11-18 23:13:05.574205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.354 [2024-11-18 23:13:05.574213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.354 [2024-11-18 23:13:05.574220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.354 [2024-11-18 23:13:05.574227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.354 [2024-11-18 23:13:05.574234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.354 [2024-11-18 23:13:05.574249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574257] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.354 [2024-11-18 23:13:05.574274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.354 [2024-11-18 23:13:05.574300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.354 [2024-11-18 23:13:05.574334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.354 [2024-11-18 23:13:05.574357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.354 [2024-11-18 23:13:05.574374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.354 [2024-11-18 23:13:05.574387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.354 [2024-11-18 23:13:05.574403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.354 [2024-11-18 23:13:05.574411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.354 [2024-11-18 23:13:05.574419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.354 [2024-11-18 23:13:05.574427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.354 [2024-11-18 23:13:05.574435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.354 [2024-11-18 23:13:05.574443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.354 [2024-11-18 23:13:05.574462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.354 [2024-11-18 23:13:05.574470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.354 [2024-11-18 23:13:05.574478] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.354 [2024-11-18 23:13:05.574492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.354 [2024-11-18 23:13:05.574504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.355 [2024-11-18 23:13:05.574513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.355 [2024-11-18 23:13:05.574523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.355 [2024-11-18 23:13:05.574531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.355 [2024-11-18 23:13:05.574539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.355 
[2024-11-18 23:13:05.574547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.355 [2024-11-18 23:13:05.574555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.355 [2024-11-18 23:13:05.574562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.355 [2024-11-18 23:13:05.574573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.355 [2024-11-18 23:13:05.574584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.355 [2024-11-18 23:13:05.574606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.355 [2024-11-18 23:13:05.574614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.355 [2024-11-18 23:13:05.574622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.355 [2024-11-18 23:13:05.574631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.355 [2024-11-18 23:13:05.574639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.355 [2024-11-18 23:13:05.574647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.355 [2024-11-18 23:13:05.574656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.355 [2024-11-18 23:13:05.574664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.355 [2024-11-18 23:13:05.574673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.355 [2024-11-18 23:13:05.574711] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.355 [2024-11-18 23:13:05.574720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.355 [2024-11-18 23:13:05.574740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.355 [2024-11-18 23:13:05.574747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.355 [2024-11-18 23:13:05.574755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.355 [2024-11-18 23:13:05.574762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.574770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.355 [2024-11-18 23:13:05.574782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:17:46.355 [2024-11-18 23:13:05.574789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.602935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.603023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.355 [2024-11-18 23:13:05.603043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.088 ms 00:17:46.355 [2024-11-18 23:13:05.603062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.603300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.603344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.355 [2024-11-18 23:13:05.603360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:46.355 [2024-11-18 23:13:05.603399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.618275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.618477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.355 [2024-11-18 23:13:05.618502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.840 ms 00:17:46.355 [2024-11-18 23:13:05.618512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.618595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.618610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.355 [2024-11-18 23:13:05.618622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.355 [2024-11-18 23:13:05.618630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.619339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.619371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.355 [2024-11-18 23:13:05.619385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:17:46.355 [2024-11-18 23:13:05.619402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.619573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.619591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.355 [2024-11-18 23:13:05.619602] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:46.355 [2024-11-18 23:13:05.619614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.629346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.629527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.355 [2024-11-18 23:13:05.629545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.706 ms 00:17:46.355 [2024-11-18 23:13:05.629554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.634284] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:46.355 [2024-11-18 23:13:05.634347] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.355 [2024-11-18 23:13:05.634365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.634374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.355 [2024-11-18 23:13:05.634384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:17:46.355 [2024-11-18 23:13:05.634392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.650770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.650823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.355 [2024-11-18 23:13:05.650847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.291 ms 00:17:46.355 [2024-11-18 23:13:05.650856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.653983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.654035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.355 [2024-11-18 23:13:05.654046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:17:46.355 [2024-11-18 23:13:05.654054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.656846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.657031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.355 [2024-11-18 23:13:05.657061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.732 ms 00:17:46.355 [2024-11-18 23:13:05.657068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.657443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.657459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.355 [2024-11-18 23:13:05.657473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:46.355 [2024-11-18 23:13:05.657488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.686478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.686551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.355 [2024-11-18 23:13:05.686568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.961 ms 00:17:46.355 [2024-11-18 23:13:05.686577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.695115] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.355 [2024-11-18 23:13:05.719889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.719950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.355 [2024-11-18 23:13:05.719967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.208 ms 00:17:46.355 [2024-11-18 23:13:05.719976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.720079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.720098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.355 [2024-11-18 23:13:05.720115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:46.355 [2024-11-18 23:13:05.720124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.720237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.355 [2024-11-18 23:13:05.720248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.355 [2024-11-18 23:13:05.720258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:46.355 [2024-11-18 23:13:05.720267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.355 [2024-11-18 23:13:05.720296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.356 [2024-11-18 23:13:05.720308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.356 [2024-11-18 23:13:05.720318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:46.356 [2024-11-18 23:13:05.720327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.356 [2024-11-18 23:13:05.720371] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.356 [2024-11-18 23:13:05.720386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.356 [2024-11-18 23:13:05.720395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.356 [2024-11-18 23:13:05.720406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:46.356 [2024-11-18 23:13:05.720414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 [2024-11-18 23:13:05.727418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-11-18 23:13:05.727472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.616 [2024-11-18 23:13:05.727485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.978 ms 00:17:46.616 [2024-11-18 23:13:05.727494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 [2024-11-18 23:13:05.727605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.616 [2024-11-18 23:13:05.727622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.616 [2024-11-18 23:13:05.727632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:46.616 [2024-11-18 23:13:05.727640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.616 
[2024-11-18 23:13:05.728896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.616 [2024-11-18 23:13:05.730414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 181.845 ms, result 0 00:17:46.616 [2024-11-18 23:13:05.732148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.616 [2024-11-18 23:13:05.739361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:47.575  [2024-11-18T23:13:07.906Z] Copying: 15/256 [MB] (15 MBps) [2024-11-18T23:13:08.857Z] Copying: 26/256 [MB] (10 MBps) [2024-11-18T23:13:10.244Z] Copying: 36/256 [MB] (10 MBps) [2024-11-18T23:13:10.817Z] Copying: 61/256 [MB] (24 MBps) [2024-11-18T23:13:12.204Z] Copying: 79/256 [MB] (18 MBps) [2024-11-18T23:13:13.150Z] Copying: 98/256 [MB] (18 MBps) [2024-11-18T23:13:14.115Z] Copying: 108/256 [MB] (10 MBps) [2024-11-18T23:13:15.063Z] Copying: 119/256 [MB] (11 MBps) [2024-11-18T23:13:16.006Z] Copying: 134/256 [MB] (14 MBps) [2024-11-18T23:13:16.947Z] Copying: 147/256 [MB] (13 MBps) [2024-11-18T23:13:17.892Z] Copying: 158/256 [MB] (10 MBps) [2024-11-18T23:13:18.836Z] Copying: 169/256 [MB] (11 MBps) [2024-11-18T23:13:19.832Z] Copying: 189/256 [MB] (19 MBps) [2024-11-18T23:13:21.222Z] Copying: 207/256 [MB] (18 MBps) [2024-11-18T23:13:22.166Z] Copying: 227/256 [MB] (20 MBps) [2024-11-18T23:13:22.428Z] Copying: 247/256 [MB] (19 MBps) [2024-11-18T23:13:22.691Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-18 23:13:22.524544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.313 [2024-11-18 23:13:22.527235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.527294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:03.313 [2024-11-18 23:13:22.527344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:03.313 [2024-11-18 23:13:22.527359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.527395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:03.313 [2024-11-18 23:13:22.528417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.528484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:03.313 [2024-11-18 23:13:22.528501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:18:03.313 [2024-11-18 23:13:22.528513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.528937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.528965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:03.313 [2024-11-18 23:13:22.529366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:18:03.313 [2024-11-18 23:13:22.529378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.534993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.535024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:03.313 [2024-11-18 23:13:22.535038] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.576 ms 00:18:03.313 [2024-11-18 23:13:22.535050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.542975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.543024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:03.313 [2024-11-18 23:13:22.543036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.868 ms 00:18:03.313 [2024-11-18 23:13:22.543045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.546259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.546310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:03.313 [2024-11-18 23:13:22.546322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:18:03.313 [2024-11-18 23:13:22.546346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.551173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.551222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:03.313 [2024-11-18 23:13:22.551243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:18:03.313 [2024-11-18 23:13:22.551252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.551412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.551425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:03.313 [2024-11-18 23:13:22.551435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:18:03.313 [2024-11-18 23:13:22.551445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.554578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.554627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:03.313 [2024-11-18 23:13:22.554639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:18:03.313 [2024-11-18 23:13:22.554647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.557572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.557620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:03.313 [2024-11-18 23:13:22.557631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:18:03.313 [2024-11-18 23:13:22.557640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.559954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.313 [2024-11-18 23:13:22.560005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:03.313 [2024-11-18 23:13:22.560015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.269 ms 00:18:03.313 [2024-11-18 23:13:22.560024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.313 [2024-11-18 23:13:22.562526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.314 [2024-11-18 23:13:22.562595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean 
state
00:18:03.314 [2024-11-18 23:13:22.562605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms
00:18:03.314 [2024-11-18 23:13:22.562613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.314 [2024-11-18 23:13:22.562657] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free for every band]
00:18:03.315 [2024-11-18 23:13:22.563579] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:03.315 [2024-11-18 23:13:22.563588] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ec74e3b2-a15b-4a81-b737-658f06791b00
00:18:03.315 [2024-11-18 23:13:22.563609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:03.315 [2024-11-18 23:13:22.563617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:03.315 [2024-11-18 23:13:22.563627] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:03.315 [2024-11-18 23:13:22.563637] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:03.315 [2024-11-18 23:13:22.563644] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:03.315 [2024-11-18 23:13:22.563654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:03.315 [2024-11-18 23:13:22.563662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:03.315 [2024-11-18 23:13:22.563669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:03.315 [2024-11-18 23:13:22.563677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:03.315 [2024-11-18 23:13:22.563685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.315 [2024-11-18 23:13:22.563694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:03.315 [2024-11-18 23:13:22.563709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms
00:18:03.315 [2024-11-18 23:13:22.563717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.315 [2024-11-18 23:13:22.567087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.315 [2024-11-18 23:13:22.567259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:03.315 [2024-11-18 23:13:22.567686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.346 ms
00:18:03.315 [2024-11-18 23:13:22.567739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.315 [2024-11-18 23:13:22.567990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:03.315 [2024-11-18 23:13:22.568096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:03.315 [2024-11-18 23:13:22.568151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms
00:18:03.315 [2024-11-18 23:13:22.568199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.315 [2024-11-18 23:13:22.578237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:03.315 [2024-11-18 23:13:22.578433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:03.315 [2024-11-18 23:13:22.578490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:03.315 [2024-11-18 23:13:22.578514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:03.315 [2024-11-18
23:13:22.578627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.315 [2024-11-18 23:13:22.578662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.315 [2024-11-18 23:13:22.578684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.315 [2024-11-18 23:13:22.578750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.315 [2024-11-18 23:13:22.578832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.315 [2024-11-18 23:13:22.578865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.316 [2024-11-18 23:13:22.578887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.578907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.578945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.579019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.316 [2024-11-18 23:13:22.579050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.579069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.598870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.599085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.316 [2024-11-18 23:13:22.599153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.599199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.614743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.614812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.316 [2024-11-18 23:13:22.614824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.614840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.614909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.614925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.316 [2024-11-18 23:13:22.614935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.614943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.614980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.614991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.316 [2024-11-18 23:13:22.615000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.615013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.615099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.615110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.316 [2024-11-18 23:13:22.615119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.615133] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.615202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.615214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:03.316 [2024-11-18 23:13:22.615224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.615233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.615294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.615305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.316 [2024-11-18 23:13:22.615331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.615341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.615424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.316 [2024-11-18 23:13:22.615437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.316 [2024-11-18 23:13:22.615448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.316 [2024-11-18 23:13:22.615462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.316 [2024-11-18 23:13:22.615654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 88.396 ms, result 0 00:18:03.577 00:18:03.577 00:18:03.577 23:13:22 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.160 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:04.160 23:13:23 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:04.458 Process with pid 86046 is not found 00:18:04.458 23:13:23 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86046 00:18:04.458 23:13:23 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86046 ']' 00:18:04.458 23:13:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86046 00:18:04.458 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86046) - No such process 00:18:04.458 23:13:23 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86046 is not found' 00:18:04.458 ************************************ 00:18:04.458 END TEST ftl_trim 00:18:04.458 ************************************ 00:18:04.458 00:18:04.458 real 1m13.125s 00:18:04.458 user 1m35.220s 00:18:04.458 sys 0m5.910s 00:18:04.458 23:13:23 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.458 23:13:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:04.458 23:13:23 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:04.458 23:13:23 ftl -- common/autotest_common.sh@1101 -- # '[' 5 
-le 1 ']' 00:18:04.458 23:13:23 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:04.458 23:13:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:04.458 ************************************ 00:18:04.458 START TEST ftl_restore 00:18:04.458 ************************************ 00:18:04.458 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:04.458 * Looking for test storage... 00:18:04.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.458 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:04.458 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:04.458 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:04.458 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:04.458 23:13:23 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.459 23:13:23 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:04.459 23:13:23 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.459 23:13:23 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.459 23:13:23 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.459 23:13:23 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:04.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.459 --rc genhtml_branch_coverage=1 00:18:04.459 --rc genhtml_function_coverage=1 00:18:04.459 --rc genhtml_legend=1 00:18:04.459 --rc geninfo_all_blocks=1 00:18:04.459 --rc geninfo_unexecuted_blocks=1 00:18:04.459 00:18:04.459 ' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:04.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.459 --rc genhtml_branch_coverage=1 00:18:04.459 --rc genhtml_function_coverage=1 00:18:04.459 --rc genhtml_legend=1 00:18:04.459 --rc geninfo_all_blocks=1 00:18:04.459 --rc geninfo_unexecuted_blocks=1 00:18:04.459 00:18:04.459 ' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:04.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.459 --rc genhtml_branch_coverage=1 00:18:04.459 --rc genhtml_function_coverage=1 00:18:04.459 --rc genhtml_legend=1 00:18:04.459 --rc geninfo_all_blocks=1 00:18:04.459 --rc geninfo_unexecuted_blocks=1 00:18:04.459 00:18:04.459 ' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:04.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.459 --rc genhtml_branch_coverage=1 00:18:04.459 --rc genhtml_function_coverage=1 00:18:04.459 --rc genhtml_legend=1 00:18:04.459 --rc geninfo_all_blocks=1 00:18:04.459 --rc geninfo_unexecuted_blocks=1 00:18:04.459 00:18:04.459 ' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
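The lcov check traced above walks through scripts/common.sh's lt/cmp_versions helpers: both version strings are split on '.', '-', and ':' into arrays and compared component by component, so 1.15 vs 2 is already decided by the first component. A minimal standalone sketch of the same logic (the name ver_lt and the zero-padding of missing components are choices of this sketch, not the script verbatim):

  ver_lt() {                 # returns 0 (true) if $1 < $2, component-wise
    local IFS=.-:            # split on the same separators the trace uses
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < max; v++ )); do
      local x=${a[v]:-0} y=${b[v]:-0}   # treat missing components as 0
      (( x > y )) && return 1
      (( x < y )) && return 0
    done
    return 1                 # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo older   # matches the traced result: 1.15 < 2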
00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.uQ8S75AbA4 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:04.459 
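The getopts trace above shows how restore.sh consumes its command line: -c selects the NV-cache controller BDF, and whatever remains after option parsing becomes the base device. A hedged reconstruction of that handling (option meanings inferred from this trace; the role of -f and the exact variable names beyond nv_cache/device/timeout are assumptions, not verbatim script code):

  while getopts ':u:c:f' opt; do
    case $opt in
      c) nv_cache=$OPTARG ;;   # here: 0000:00:10.0
      u) uuid=$OPTARG ;;       # presumably an FTL instance UUID to restore
      f) fast=1 ;;             # -f takes no argument; its role isn't visible here
    esac
  done
  shift $((OPTIND - 1))        # the trace records this as 'shift 2'
  device=$1                    # here: 0000:00:11.0
  timeout=240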
23:13:23 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86351 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86351 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86351 ']' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.459 23:13:23 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.459 23:13:23 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:04.721 [2024-11-18 23:13:23.865612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:04.721 [2024-11-18 23:13:23.865736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86351 ] 00:18:04.721 [2024-11-18 23:13:24.016797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.721 [2024-11-18 23:13:24.082903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.661 23:13:24 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.661 23:13:24 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:05.661 23:13:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:05.661 23:13:25 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:05.661 23:13:25 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:05.661 23:13:25 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:05.661 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:05.661 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:05.661 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:05.661 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:05.661 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:05.923 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.923 { 00:18:05.923 "name": "nvme0n1", 00:18:05.923 "aliases": [ 00:18:05.923 "b1cdc956-2f68-4893-86d7-56b0e2b377bd" 00:18:05.923 ], 00:18:05.923 "product_name": "NVMe disk", 00:18:05.923 "block_size": 4096, 00:18:05.923 "num_blocks": 1310720, 00:18:05.923 "uuid": 
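Stripped of the xtrace noise, the create_base_bdev step above reduces to two RPCs against the freshly started target; a minimal hand-run equivalent (from the repo root, assuming the default /var/tmp/spdk.sock socket) would be:

  # attach the NVMe controller at PCI 0000:00:11.0 under the name "nvme0";
  # its namespace then surfaces as the bdev "nvme0n1"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # dump the bdev's JSON descriptor; block_size and num_blocks are read from it next
  scripts/rpc.py bdev_get_bdevs -b nvme0n1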
"b1cdc956-2f68-4893-86d7-56b0e2b377bd", 00:18:05.923 "numa_id": -1, 00:18:05.923 "assigned_rate_limits": { 00:18:05.923 "rw_ios_per_sec": 0, 00:18:05.923 "rw_mbytes_per_sec": 0, 00:18:05.923 "r_mbytes_per_sec": 0, 00:18:05.923 "w_mbytes_per_sec": 0 00:18:05.923 }, 00:18:05.923 "claimed": true, 00:18:05.923 "claim_type": "read_many_write_one", 00:18:05.923 "zoned": false, 00:18:05.923 "supported_io_types": { 00:18:05.923 "read": true, 00:18:05.923 "write": true, 00:18:05.923 "unmap": true, 00:18:05.923 "flush": true, 00:18:05.923 "reset": true, 00:18:05.923 "nvme_admin": true, 00:18:05.923 "nvme_io": true, 00:18:05.923 "nvme_io_md": false, 00:18:05.923 "write_zeroes": true, 00:18:05.923 "zcopy": false, 00:18:05.923 "get_zone_info": false, 00:18:05.923 "zone_management": false, 00:18:05.923 "zone_append": false, 00:18:05.923 "compare": true, 00:18:05.923 "compare_and_write": false, 00:18:05.923 "abort": true, 00:18:05.923 "seek_hole": false, 00:18:05.923 "seek_data": false, 00:18:05.923 "copy": true, 00:18:05.923 "nvme_iov_md": false 00:18:05.923 }, 00:18:05.923 "driver_specific": { 00:18:05.923 "nvme": [ 00:18:05.923 { 00:18:05.923 "pci_address": "0000:00:11.0", 00:18:05.923 "trid": { 00:18:05.923 "trtype": "PCIe", 00:18:05.923 "traddr": "0000:00:11.0" 00:18:05.923 }, 00:18:05.923 "ctrlr_data": { 00:18:05.923 "cntlid": 0, 00:18:05.923 "vendor_id": "0x1b36", 00:18:05.923 "model_number": "QEMU NVMe Ctrl", 00:18:05.923 "serial_number": "12341", 00:18:05.923 "firmware_revision": "8.0.0", 00:18:05.923 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:05.923 "oacs": { 00:18:05.923 "security": 0, 00:18:05.923 "format": 1, 00:18:05.923 "firmware": 0, 00:18:05.923 "ns_manage": 1 00:18:05.923 }, 00:18:05.923 "multi_ctrlr": false, 00:18:05.923 "ana_reporting": false 00:18:05.923 }, 00:18:05.923 "vs": { 00:18:05.923 "nvme_version": "1.4" 00:18:05.923 }, 00:18:05.923 "ns_data": { 00:18:05.923 "id": 1, 00:18:05.923 "can_share": false 00:18:05.923 } 00:18:05.923 } 00:18:05.923 ], 00:18:05.923 "mp_policy": "active_passive" 00:18:05.923 } 00:18:05.923 } 00:18:05.923 ]' 00:18:05.923 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.923 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.923 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.184 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:06.184 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:06.184 23:13:25 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=051ca3a0-d80b-4595-9d69-ca3ffb60917e 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:06.184 23:13:25 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 051ca3a0-d80b-4595-9d69-ca3ffb60917e 00:18:06.446 23:13:25 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:06.706 23:13:25 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c9da61ad-5018-444f-af3f-0fb2cd04932d 00:18:06.707 23:13:25 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9da61ad-5018-444f-af3f-0fb2cd04932d 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:06.968 23:13:26 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.968 { 00:18:06.968 "name": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:06.968 "aliases": [ 00:18:06.968 "lvs/nvme0n1p0" 00:18:06.968 ], 00:18:06.968 "product_name": "Logical Volume", 00:18:06.968 "block_size": 4096, 00:18:06.968 "num_blocks": 26476544, 00:18:06.968 "uuid": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:06.968 "assigned_rate_limits": { 00:18:06.968 "rw_ios_per_sec": 0, 00:18:06.968 "rw_mbytes_per_sec": 0, 00:18:06.968 "r_mbytes_per_sec": 0, 00:18:06.968 "w_mbytes_per_sec": 0 00:18:06.968 }, 00:18:06.968 "claimed": false, 00:18:06.968 "zoned": false, 00:18:06.968 "supported_io_types": { 00:18:06.968 "read": true, 00:18:06.968 "write": true, 00:18:06.968 "unmap": true, 00:18:06.968 "flush": false, 00:18:06.968 "reset": true, 00:18:06.968 "nvme_admin": false, 00:18:06.968 "nvme_io": false, 00:18:06.968 "nvme_io_md": false, 00:18:06.968 "write_zeroes": true, 00:18:06.968 "zcopy": false, 00:18:06.968 "get_zone_info": false, 00:18:06.968 "zone_management": false, 00:18:06.968 "zone_append": false, 00:18:06.968 "compare": false, 00:18:06.968 "compare_and_write": false, 00:18:06.968 "abort": false, 00:18:06.968 "seek_hole": true, 00:18:06.968 "seek_data": true, 00:18:06.968 "copy": false, 00:18:06.968 "nvme_iov_md": false 00:18:06.968 }, 00:18:06.968 "driver_specific": { 00:18:06.968 "lvol": { 00:18:06.968 "lvol_store_uuid": "c9da61ad-5018-444f-af3f-0fb2cd04932d", 00:18:06.968 "base_bdev": "nvme0n1", 00:18:06.968 "thin_provision": true, 00:18:06.968 "num_allocated_clusters": 0, 00:18:06.968 "snapshot": false, 00:18:06.968 "clone": false, 00:18:06.968 "esnap_clone": false 00:18:06.968 } 00:18:06.968 } 00:18:06.968 } 00:18:06.968 ]' 00:18:06.968 23:13:26 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.968 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.229 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:07.229 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:07.229 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:07.229 23:13:26 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:07.229 23:13:26 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:07.229 23:13:26 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:07.491 23:13:26 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:07.491 23:13:26 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:07.491 23:13:26 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:07.491 { 00:18:07.491 "name": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:07.491 "aliases": [ 00:18:07.491 "lvs/nvme0n1p0" 00:18:07.491 ], 00:18:07.491 "product_name": "Logical Volume", 00:18:07.491 "block_size": 4096, 00:18:07.491 "num_blocks": 26476544, 00:18:07.491 "uuid": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:07.491 "assigned_rate_limits": { 00:18:07.491 "rw_ios_per_sec": 0, 00:18:07.491 "rw_mbytes_per_sec": 0, 00:18:07.491 "r_mbytes_per_sec": 0, 00:18:07.491 "w_mbytes_per_sec": 0 00:18:07.491 }, 00:18:07.491 "claimed": false, 00:18:07.491 "zoned": false, 00:18:07.491 "supported_io_types": { 00:18:07.491 "read": true, 00:18:07.491 "write": true, 00:18:07.491 "unmap": true, 00:18:07.491 "flush": false, 00:18:07.491 "reset": true, 00:18:07.491 "nvme_admin": false, 00:18:07.491 "nvme_io": false, 00:18:07.491 "nvme_io_md": false, 00:18:07.491 "write_zeroes": true, 00:18:07.491 "zcopy": false, 00:18:07.491 "get_zone_info": false, 00:18:07.491 "zone_management": false, 00:18:07.491 "zone_append": false, 00:18:07.491 "compare": false, 00:18:07.491 "compare_and_write": false, 00:18:07.491 "abort": false, 00:18:07.491 "seek_hole": true, 00:18:07.491 "seek_data": true, 00:18:07.491 "copy": false, 00:18:07.491 "nvme_iov_md": false 00:18:07.491 }, 00:18:07.491 "driver_specific": { 00:18:07.491 "lvol": { 00:18:07.491 "lvol_store_uuid": "c9da61ad-5018-444f-af3f-0fb2cd04932d", 00:18:07.491 "base_bdev": "nvme0n1", 00:18:07.491 "thin_provision": true, 00:18:07.491 "num_allocated_clusters": 0, 00:18:07.491 "snapshot": false, 00:18:07.491 "clone": false, 00:18:07.491 "esnap_clone": false 00:18:07.491 } 00:18:07.491 } 00:18:07.491 } 00:18:07.491 ]' 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
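That is the get_bdev_size helper again, now pointed at the thin-provisioned lvol. Boiled down to a standalone sketch (two rpc.py calls here instead of the helper's cached bdev_info; values taken from the JSON dump above):

  bdev=5b3cd09c-67ed-479c-b5a8-3b006b3defd8
  bs=$(scripts/rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')   # 4096
  nb=$(scripts/rpc.py bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')   # 26476544
  echo $(( bs * nb / 1024 / 1024 ))    # 4096 * 26476544 / 2^20 = 103424 (MiB)

which is exactly the bs=4096 / nb=26476544 / bdev_size=103424 sequence the trace prints next.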
00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:07.491 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.762 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:07.762 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:07.762 23:13:26 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:07.762 23:13:26 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:07.762 23:13:26 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:07.762 23:13:27 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:07.762 23:13:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:07.762 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:07.762 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:07.762 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:07.762 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:07.762 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.025 { 00:18:08.025 "name": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:08.025 "aliases": [ 00:18:08.025 "lvs/nvme0n1p0" 00:18:08.025 ], 00:18:08.025 "product_name": "Logical Volume", 00:18:08.025 "block_size": 4096, 00:18:08.025 "num_blocks": 26476544, 00:18:08.025 "uuid": "5b3cd09c-67ed-479c-b5a8-3b006b3defd8", 00:18:08.025 "assigned_rate_limits": { 00:18:08.025 "rw_ios_per_sec": 0, 00:18:08.025 "rw_mbytes_per_sec": 0, 00:18:08.025 "r_mbytes_per_sec": 0, 00:18:08.025 "w_mbytes_per_sec": 0 00:18:08.025 }, 00:18:08.025 "claimed": false, 00:18:08.025 "zoned": false, 00:18:08.025 "supported_io_types": { 00:18:08.025 "read": true, 00:18:08.025 "write": true, 00:18:08.025 "unmap": true, 00:18:08.025 "flush": false, 00:18:08.025 "reset": true, 00:18:08.025 "nvme_admin": false, 00:18:08.025 "nvme_io": false, 00:18:08.025 "nvme_io_md": false, 00:18:08.025 "write_zeroes": true, 00:18:08.025 "zcopy": false, 00:18:08.025 "get_zone_info": false, 00:18:08.025 "zone_management": false, 00:18:08.025 "zone_append": false, 00:18:08.025 "compare": false, 00:18:08.025 "compare_and_write": false, 00:18:08.025 "abort": false, 00:18:08.025 "seek_hole": true, 00:18:08.025 "seek_data": true, 00:18:08.025 "copy": false, 00:18:08.025 "nvme_iov_md": false 00:18:08.025 }, 00:18:08.025 "driver_specific": { 00:18:08.025 "lvol": { 00:18:08.025 "lvol_store_uuid": "c9da61ad-5018-444f-af3f-0fb2cd04932d", 00:18:08.025 "base_bdev": "nvme0n1", 00:18:08.025 "thin_provision": true, 00:18:08.025 "num_allocated_clusters": 0, 00:18:08.025 "snapshot": false, 00:18:08.025 "clone": false, 00:18:08.025 "esnap_clone": false 00:18:08.025 } 00:18:08.025 } 00:18:08.025 } 00:18:08.025 ]' 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:08.025 23:13:27 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:08.025 23:13:27 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:08.025 23:13:27 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 --l2p_dram_limit 10' 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:08.026 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:08.026 23:13:27 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5b3cd09c-67ed-479c-b5a8-3b006b3defd8 --l2p_dram_limit 10 -c nvc0n1p0 00:18:08.287 [2024-11-18 23:13:27.511443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.511568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.287 [2024-11-18 23:13:27.511618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.287 [2024-11-18 23:13:27.511639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.511713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.511736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.287 [2024-11-18 23:13:27.511753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:08.287 [2024-11-18 23:13:27.511778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.511818] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.287 [2024-11-18 23:13:27.512165] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.287 [2024-11-18 23:13:27.512234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.512259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.287 [2024-11-18 23:13:27.512282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:18:08.287 [2024-11-18 23:13:27.512300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.512374] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e018dca0-090d-4735-a005-a51940083468 00:18:08.287 [2024-11-18 23:13:27.513704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.513790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:08.287 [2024-11-18 23:13:27.513805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:08.287 [2024-11-18 23:13:27.513815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.520782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 
23:13:27.520877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.287 [2024-11-18 23:13:27.520891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:18:08.287 [2024-11-18 23:13:27.520898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.520965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.520975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.287 [2024-11-18 23:13:27.520983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:08.287 [2024-11-18 23:13:27.520992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.521033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.521041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.287 [2024-11-18 23:13:27.521049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.287 [2024-11-18 23:13:27.521055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.521074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.287 [2024-11-18 23:13:27.522768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.522796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.287 [2024-11-18 23:13:27.522806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.701 ms 00:18:08.287 [2024-11-18 23:13:27.522814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.522847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.522858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.287 [2024-11-18 23:13:27.522865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:08.287 [2024-11-18 23:13:27.522874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.522892] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:08.287 [2024-11-18 23:13:27.523013] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:08.287 [2024-11-18 23:13:27.523023] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:08.287 [2024-11-18 23:13:27.523034] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:08.287 [2024-11-18 23:13:27.523043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.287 [2024-11-18 23:13:27.523053] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.287 [2024-11-18 23:13:27.523064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:08.287 [2024-11-18 23:13:27.523075] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.287 [2024-11-18 23:13:27.523081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:08.287 [2024-11-18 23:13:27.523088] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:08.287 [2024-11-18 23:13:27.523096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.523103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.287 [2024-11-18 23:13:27.523112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:18:08.287 [2024-11-18 23:13:27.523119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.523210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.287 [2024-11-18 23:13:27.523222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.287 [2024-11-18 23:13:27.523229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:08.287 [2024-11-18 23:13:27.523237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.287 [2024-11-18 23:13:27.523324] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.287 [2024-11-18 23:13:27.523336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.287 [2024-11-18 23:13:27.523343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.287 [2024-11-18 23:13:27.523351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.287 [2024-11-18 23:13:27.523358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:08.287 [2024-11-18 23:13:27.523364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.287 [2024-11-18 23:13:27.523369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:08.287 [2024-11-18 23:13:27.523377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.287 [2024-11-18 23:13:27.523383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:08.287 [2024-11-18 23:13:27.523391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.287 [2024-11-18 23:13:27.523396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.287 [2024-11-18 23:13:27.523403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:08.287 [2024-11-18 23:13:27.523408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.287 [2024-11-18 23:13:27.523417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.287 [2024-11-18 23:13:27.523423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:08.287 [2024-11-18 23:13:27.523431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.287 [2024-11-18 23:13:27.523437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.288 [2024-11-18 23:13:27.523444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.288 [2024-11-18 23:13:27.523465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.288 
[2024-11-18 23:13:27.523486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.288 [2024-11-18 23:13:27.523506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:08.288 [2024-11-18 23:13:27.523527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.288 [2024-11-18 23:13:27.523546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.288 [2024-11-18 23:13:27.523561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.288 [2024-11-18 23:13:27.523568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:08.288 [2024-11-18 23:13:27.523573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.288 [2024-11-18 23:13:27.523581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:08.288 [2024-11-18 23:13:27.523587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:08.288 [2024-11-18 23:13:27.523595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:08.288 [2024-11-18 23:13:27.523607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:08.288 [2024-11-18 23:13:27.523612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.288 [2024-11-18 23:13:27.523627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.288 [2024-11-18 23:13:27.523637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.288 [2024-11-18 23:13:27.523651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:08.288 [2024-11-18 23:13:27.523657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:08.288 [2024-11-18 23:13:27.523665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:08.288 [2024-11-18 23:13:27.523671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:08.288 [2024-11-18 23:13:27.523680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:08.288 [2024-11-18 23:13:27.523686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:08.288 [2024-11-18 23:13:27.523696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:08.288 [2024-11-18 
23:13:27.523705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:08.288 [2024-11-18 23:13:27.523721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:08.288 [2024-11-18 23:13:27.523728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:08.288 [2024-11-18 23:13:27.523735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:08.288 [2024-11-18 23:13:27.523744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:08.288 [2024-11-18 23:13:27.523750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:08.288 [2024-11-18 23:13:27.523760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:08.288 [2024-11-18 23:13:27.523766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:08.288 [2024-11-18 23:13:27.523774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:08.288 [2024-11-18 23:13:27.523780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:08.288 [2024-11-18 23:13:27.523813] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:08.288 [2024-11-18 23:13:27.523821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.288 [2024-11-18 23:13:27.523834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.288 [2024-11-18 23:13:27.523841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.288 [2024-11-18 23:13:27.523846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.288 [2024-11-18 23:13:27.523853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.288 [2024-11-18 23:13:27.523858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.288 [2024-11-18 23:13:27.523867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:18:08.288 [2024-11-18 23:13:27.523873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.288 [2024-11-18 23:13:27.523905] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:08.288 [2024-11-18 23:13:27.523912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:11.590 [2024-11-18 23:13:30.928491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.590 [2024-11-18 23:13:30.928599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:11.590 [2024-11-18 23:13:30.928628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3404.558 ms 00:18:11.590 [2024-11-18 23:13:30.928638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.590 [2024-11-18 23:13:30.948690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.590 [2024-11-18 23:13:30.948756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.590 [2024-11-18 23:13:30.948775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.890 ms 00:18:11.590 [2024-11-18 23:13:30.948785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.590 [2024-11-18 23:13:30.948934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.590 [2024-11-18 23:13:30.948946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.590 [2024-11-18 23:13:30.948965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:11.590 [2024-11-18 23:13:30.948981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 [2024-11-18 23:13:30.965201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:30.965252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.851 [2024-11-18 23:13:30.965268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.171 ms 00:18:11.851 [2024-11-18 23:13:30.965277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 [2024-11-18 23:13:30.965321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:30.965334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.851 [2024-11-18 23:13:30.965346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.851 [2024-11-18 23:13:30.965355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 [2024-11-18 23:13:30.966089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:30.966129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.851 [2024-11-18 23:13:30.966145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:18:11.851 [2024-11-18 23:13:30.966175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 
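A quick consistency check on the layout dump above: the l2p region size follows directly from the advertised geometry, assuming one 4-byte L2P entry per 4 KiB logical block (the dump reports "L2P entries: 20971520" and "L2P address size: 4"):

  20971520 entries x 4 bytes/entry = 83886080 bytes = 80.00 MiB

which matches "Region l2p ... blocks: 80.00 MiB". The --l2p_dram_limit 10 handed to bdev_ftl_create caps how much of that 80 MiB table may sit in DRAM at once, which is why the l2p_cache_init notice just below reports a maximum resident size of 9 (of 10) MiB.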
[2024-11-18 23:13:30.966318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:30.966331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.851 [2024-11-18 23:13:30.966347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:18:11.851 [2024-11-18 23:13:30.966356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 [2024-11-18 23:13:30.991524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:30.991587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.851 [2024-11-18 23:13:30.991605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.135 ms 00:18:11.851 [2024-11-18 23:13:30.991617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.851 [2024-11-18 23:13:31.003320] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.851 [2024-11-18 23:13:31.008391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.851 [2024-11-18 23:13:31.008445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.851 [2024-11-18 23:13:31.008460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.620 ms 00:18:11.852 [2024-11-18 23:13:31.008472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.097358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.097434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:11.852 [2024-11-18 23:13:31.097450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.848 ms 00:18:11.852 [2024-11-18 23:13:31.097475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.097713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.097731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.852 [2024-11-18 23:13:31.097741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:18:11.852 [2024-11-18 23:13:31.097752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.104531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.104596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:11.852 [2024-11-18 23:13:31.104609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.736 ms 00:18:11.852 [2024-11-18 23:13:31.104621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.110704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.111014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:11.852 [2024-11-18 23:13:31.111037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.022 ms 00:18:11.852 [2024-11-18 23:13:31.111049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.111601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.111644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.852 
[2024-11-18 23:13:31.111659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:18:11.852 [2024-11-18 23:13:31.111676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.162779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.162842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:11.852 [2024-11-18 23:13:31.162856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.053 ms 00:18:11.852 [2024-11-18 23:13:31.162869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.171709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.171770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:11.852 [2024-11-18 23:13:31.171782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.768 ms 00:18:11.852 [2024-11-18 23:13:31.171794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.178690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.178750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:11.852 [2024-11-18 23:13:31.178761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:18:11.852 [2024-11-18 23:13:31.178773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.186060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.186352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.852 [2024-11-18 23:13:31.186375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.234 ms 00:18:11.852 [2024-11-18 23:13:31.186390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.186463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.186485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.852 [2024-11-18 23:13:31.186496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:11.852 [2024-11-18 23:13:31.186508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.186598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.852 [2024-11-18 23:13:31.186612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.852 [2024-11-18 23:13:31.186622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:11.852 [2024-11-18 23:13:31.186647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.852 [2024-11-18 23:13:31.188070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3676.049 ms, result 0 00:18:11.852 { 00:18:11.852 "name": "ftl0", 00:18:11.852 "uuid": "e018dca0-090d-4735-a005-a51940083468" 00:18:11.852 } 00:18:11.852 23:13:31 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:11.852 23:13:31 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:12.112 23:13:31 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:12.112 23:13:31 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:12.387 [2024-11-18 23:13:31.637405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.387 [2024-11-18 23:13:31.637637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:12.387 [2024-11-18 23:13:31.637668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.388 [2024-11-18 23:13:31.637678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.388 [2024-11-18 23:13:31.637721] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.388 [2024-11-18 23:13:31.638733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.388 [2024-11-18 23:13:31.638792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:12.388 [2024-11-18 23:13:31.638806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:18:12.388 [2024-11-18 23:13:31.638817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.388 [2024-11-18 23:13:31.639089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.388 [2024-11-18 23:13:31.639105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:12.388 [2024-11-18 23:13:31.639119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:18:12.388 [2024-11-18 23:13:31.639130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.388 [2024-11-18 23:13:31.642477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.388 [2024-11-18 23:13:31.642514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:12.388 [2024-11-18 23:13:31.642524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:18:12.388 [2024-11-18 23:13:31.642536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.388 [2024-11-18 23:13:31.648810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.649031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:12.389 [2024-11-18 23:13:31.649054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.255 ms 00:18:12.389 [2024-11-18 23:13:31.649066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.652410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.652480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:12.389 [2024-11-18 23:13:31.652491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.214 ms 00:18:12.389 [2024-11-18 23:13:31.652503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.658369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.658410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:12.389 [2024-11-18 23:13:31.658421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.813 ms 00:18:12.389 [2024-11-18 23:13:31.658430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.658569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.658583] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:12.389 [2024-11-18 23:13:31.658593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:12.389 [2024-11-18 23:13:31.658602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.661512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.661551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:12.389 [2024-11-18 23:13:31.661561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:18:12.389 [2024-11-18 23:13:31.661569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.663905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.663944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:12.389 [2024-11-18 23:13:31.663953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:18:12.389 [2024-11-18 23:13:31.663962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.665799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.665837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:12.389 [2024-11-18 23:13:31.665847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:18:12.389 [2024-11-18 23:13:31.665858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.667595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.389 [2024-11-18 23:13:31.667632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:12.389 [2024-11-18 23:13:31.667641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:18:12.389 [2024-11-18 23:13:31.667654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.389 [2024-11-18 23:13:31.667685] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:12.390 [2024-11-18 23:13:31.667703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667788] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:12.390 [2024-11-18 23:13:31.667797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:12.392 [2024-11-18 23:13:31.667805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:12.392 [2024-11-18 23:13:31.667814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:12.392 [2024-11-18 23:13:31.667822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:12.393 [2024-11-18 23:13:31.667957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.667965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.667974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.667982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.667992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.668000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 
[2024-11-18 23:13:31.668010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.668018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.668030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.668037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:12.394 [2024-11-18 23:13:31.668047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:12.395 [2024-11-18 23:13:31.668218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:18:12.396 [2024-11-18 23:13:31.668254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:12.396 [2024-11-18 23:13:31.668429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:12.397 [2024-11-18 23:13:31.668604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:12.398 [2024-11-18 23:13:31.668612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:12.398 [2024-11-18 23:13:31.668631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:12.398 [2024-11-18 23:13:31.668639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e018dca0-090d-4735-a005-a51940083468 00:18:12.398 [2024-11-18 23:13:31.668651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:12.398 [2024-11-18 23:13:31.668662] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:12.398 [2024-11-18 23:13:31.668672] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:12.398 [2024-11-18 23:13:31.668680] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:12.398 [2024-11-18 23:13:31.668689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:12.398 [2024-11-18 23:13:31.668696] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:12.398 [2024-11-18 23:13:31.668706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:12.398 [2024-11-18 23:13:31.668713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:12.398 [2024-11-18 23:13:31.668721] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:12.398 [2024-11-18 23:13:31.668729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.398 [2024-11-18 23:13:31.668740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:12.398 [2024-11-18 23:13:31.668748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.045 ms 00:18:12.398 [2024-11-18 23:13:31.668758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.398 [2024-11-18 23:13:31.670672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.398 [2024-11-18 23:13:31.670697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:12.398 [2024-11-18 23:13:31.670706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:18:12.398 [2024-11-18 23:13:31.670715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.398 [2024-11-18 23:13:31.670811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.398 [2024-11-18 23:13:31.670822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:12.398 [2024-11-18 23:13:31.670830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:12.399 [2024-11-18 23:13:31.670839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.399 [2024-11-18 23:13:31.677802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.399 [2024-11-18 23:13:31.677839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.399 [2024-11-18 23:13:31.677849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.399 [2024-11-18 23:13:31.677858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.399 [2024-11-18 23:13:31.677915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.399 [2024-11-18 23:13:31.677926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.399 [2024-11-18 23:13:31.677934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.399 [2024-11-18 23:13:31.677944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.399 [2024-11-18 23:13:31.678009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.399 [2024-11-18 23:13:31.678025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.399 [2024-11-18 23:13:31.678033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.399 [2024-11-18 23:13:31.678043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.399 [2024-11-18 23:13:31.678060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.399 [2024-11-18 23:13:31.678072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.399 [2024-11-18 23:13:31.678080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.400 [2024-11-18 23:13:31.678090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.400 [2024-11-18 23:13:31.690280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.400 [2024-11-18 23:13:31.690321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.400 [2024-11-18 23:13:31.690331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.400 
[2024-11-18 23:13:31.690342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.400 [2024-11-18 23:13:31.700850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.400 [2024-11-18 23:13:31.700896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.400 [2024-11-18 23:13:31.700906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.400 [2024-11-18 23:13:31.700919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.400 [2024-11-18 23:13:31.700987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.400 [2024-11-18 23:13:31.701003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.400 [2024-11-18 23:13:31.701011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.400 [2024-11-18 23:13:31.701021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.404 [2024-11-18 23:13:31.701092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.404 [2024-11-18 23:13:31.701105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.405 [2024-11-18 23:13:31.701116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.405 [2024-11-18 23:13:31.701131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.405 [2024-11-18 23:13:31.701221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.405 [2024-11-18 23:13:31.701234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.405 [2024-11-18 23:13:31.701243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.405 [2024-11-18 23:13:31.701252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.405 [2024-11-18 23:13:31.701289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.405 [2024-11-18 23:13:31.701301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:12.405 [2024-11-18 23:13:31.701328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.405 [2024-11-18 23:13:31.701341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.405 [2024-11-18 23:13:31.701383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.405 [2024-11-18 23:13:31.701396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.405 [2024-11-18 23:13:31.701405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.405 [2024-11-18 23:13:31.701415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.405 [2024-11-18 23:13:31.701461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:12.405 [2024-11-18 23:13:31.701474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.405 [2024-11-18 23:13:31.701484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:12.405 [2024-11-18 23:13:31.701494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.406 [2024-11-18 23:13:31.701641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.202 ms, result 0 00:18:12.406 true 00:18:12.406 23:13:31 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86351 00:18:12.406 
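[Editor's note] The unload returns true, and the test then tears the target down with killprocess 86351 (the pid captured when the SPDK app was launched). The traced function body in the lines that follow amounts to a guard-then-signal-then-reap pattern; a minimal sketch of that shape, with the autotest-specific uname/reactor/sudo checks omitted:

  # Sketch of the killprocess pattern traced below: verify the pid,
  # send SIGTERM, then wait so the exit status is reaped.
  killprocess() {
    [ -z "$1" ] && return 1    # require a pid argument
    kill -0 "$1" || return 1   # process must still exist
    kill "$1"                  # terminate it
    wait "$1"                  # reap; propagates a non-zero exit code
  }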
23:13:31 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86351 ']' 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86351 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86351 00:18:12.406 killing process with pid 86351 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86351' 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86351 00:18:12.406 23:13:31 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86351 00:18:17.696 23:13:36 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:22.980 262144+0 records in 00:18:22.980 262144+0 records out 00:18:22.980 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.51246 s, 238 MB/s 00:18:22.980 23:13:41 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:24.367 23:13:43 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:24.367 [2024-11-18 23:13:43.588337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:24.367 [2024-11-18 23:13:43.588447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86575 ] 00:18:24.367 [2024-11-18 23:13:43.727192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.628 [2024-11-18 23:13:43.774126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.628 [2024-11-18 23:13:43.872553] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.628 [2024-11-18 23:13:43.872610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.891 [2024-11-18 23:13:44.020149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.020190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.891 [2024-11-18 23:13:44.020206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.891 [2024-11-18 23:13:44.020212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.020250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.020258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.891 [2024-11-18 23:13:44.020264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:24.891 [2024-11-18 23:13:44.020270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.020287] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:18:24.891 [2024-11-18 23:13:44.020469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.891 [2024-11-18 23:13:44.020480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.020487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.891 [2024-11-18 23:13:44.020498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:18:24.891 [2024-11-18 23:13:44.020505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.021753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:24.891 [2024-11-18 23:13:44.024190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.024218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:24.891 [2024-11-18 23:13:44.024228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:18:24.891 [2024-11-18 23:13:44.024234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.024282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.024290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:24.891 [2024-11-18 23:13:44.024296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:24.891 [2024-11-18 23:13:44.024302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.030372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.030398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.891 [2024-11-18 23:13:44.030406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:18:24.891 [2024-11-18 23:13:44.030416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.030484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.030492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.891 [2024-11-18 23:13:44.030498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:24.891 [2024-11-18 23:13:44.030504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.030536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.030546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.891 [2024-11-18 23:13:44.030553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.891 [2024-11-18 23:13:44.030561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.030581] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.891 [2024-11-18 23:13:44.032124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.032148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.891 [2024-11-18 23:13:44.032169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:18:24.891 [2024-11-18 23:13:44.032180] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.032205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.032212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.891 [2024-11-18 23:13:44.032221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:24.891 [2024-11-18 23:13:44.032226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.032244] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:24.891 [2024-11-18 23:13:44.032261] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:24.891 [2024-11-18 23:13:44.032292] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:24.891 [2024-11-18 23:13:44.032311] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:24.891 [2024-11-18 23:13:44.032393] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.891 [2024-11-18 23:13:44.032402] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.891 [2024-11-18 23:13:44.032410] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:24.891 [2024-11-18 23:13:44.032418] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.891 [2024-11-18 23:13:44.032427] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.891 [2024-11-18 23:13:44.032436] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:24.891 [2024-11-18 23:13:44.032443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.891 [2024-11-18 23:13:44.032449] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.891 [2024-11-18 23:13:44.032455] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.891 [2024-11-18 23:13:44.032461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.032467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.891 [2024-11-18 23:13:44.032473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:18:24.891 [2024-11-18 23:13:44.032481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.032544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.891 [2024-11-18 23:13:44.032554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.891 [2024-11-18 23:13:44.032560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:24.891 [2024-11-18 23:13:44.032566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.891 [2024-11-18 23:13:44.032642] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.891 [2024-11-18 23:13:44.032650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.891 [2024-11-18 23:13:44.032656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:18:24.891 [2024-11-18 23:13:44.032666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.891 [2024-11-18 23:13:44.032672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.891 [2024-11-18 23:13:44.032677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.891 [2024-11-18 23:13:44.032683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:24.891 [2024-11-18 23:13:44.032689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.891 [2024-11-18 23:13:44.032694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:24.891 [2024-11-18 23:13:44.032700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.891 [2024-11-18 23:13:44.032706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.891 [2024-11-18 23:13:44.032711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:24.891 [2024-11-18 23:13:44.032716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.891 [2024-11-18 23:13:44.032723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.891 [2024-11-18 23:13:44.032729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:24.892 [2024-11-18 23:13:44.032734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.892 [2024-11-18 23:13:44.032745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.892 [2024-11-18 23:13:44.032760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.892 [2024-11-18 23:13:44.032777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.892 [2024-11-18 23:13:44.032794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.892 [2024-11-18 23:13:44.032815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.892 [2024-11-18 23:13:44.032833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.892 [2024-11-18 23:13:44.032845] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:18:24.892 [2024-11-18 23:13:44.032850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:24.892 [2024-11-18 23:13:44.032856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.892 [2024-11-18 23:13:44.032862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.892 [2024-11-18 23:13:44.032867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:24.892 [2024-11-18 23:13:44.032873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:24.892 [2024-11-18 23:13:44.032884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:24.892 [2024-11-18 23:13:44.032891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032898] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.892 [2024-11-18 23:13:44.032906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.892 [2024-11-18 23:13:44.032914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.892 [2024-11-18 23:13:44.032932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.892 [2024-11-18 23:13:44.032938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.892 [2024-11-18 23:13:44.032944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.892 [2024-11-18 23:13:44.032950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.892 [2024-11-18 23:13:44.032956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.892 [2024-11-18 23:13:44.032961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.892 [2024-11-18 23:13:44.032969] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.892 [2024-11-18 23:13:44.032977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.032985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:24.892 [2024-11-18 23:13:44.032991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:24.892 [2024-11-18 23:13:44.032997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:24.892 [2024-11-18 23:13:44.033003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:24.892 [2024-11-18 23:13:44.033010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:24.892 [2024-11-18 23:13:44.033017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:24.892 [2024-11-18 23:13:44.033025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:24.892 [2024-11-18 23:13:44.033031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:24.892 [2024-11-18 23:13:44.033037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:24.892 [2024-11-18 23:13:44.033044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:24.892 [2024-11-18 23:13:44.033076] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.892 [2024-11-18 23:13:44.033083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.892 [2024-11-18 23:13:44.033097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.892 [2024-11-18 23:13:44.033104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.892 [2024-11-18 23:13:44.033110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.892 [2024-11-18 23:13:44.033116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.033124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.892 [2024-11-18 23:13:44.033133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:18:24.892 [2024-11-18 23:13:44.033139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.060785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.060838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.892 [2024-11-18 23:13:44.060870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.600 ms 00:18:24.892 [2024-11-18 23:13:44.060886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.061015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.061029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.892 [2024-11-18 23:13:44.061042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.089 ms 00:18:24.892 [2024-11-18 23:13:44.061058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.072485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.072519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.892 [2024-11-18 23:13:44.072530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.351 ms 00:18:24.892 [2024-11-18 23:13:44.072538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.072567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.072575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.892 [2024-11-18 23:13:44.072584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:24.892 [2024-11-18 23:13:44.072591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.073032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.073061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.892 [2024-11-18 23:13:44.073070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:18:24.892 [2024-11-18 23:13:44.073078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.073238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.892 [2024-11-18 23:13:44.073252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.892 [2024-11-18 23:13:44.073261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:18:24.892 [2024-11-18 23:13:44.073270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.892 [2024-11-18 23:13:44.079204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.079232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.893 [2024-11-18 23:13:44.079246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.913 ms 00:18:24.893 [2024-11-18 23:13:44.079254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.082419] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:24.893 [2024-11-18 23:13:44.082558] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:24.893 [2024-11-18 23:13:44.082574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.082588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:24.893 [2024-11-18 23:13:44.082596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:18:24.893 [2024-11-18 23:13:44.082603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.097438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.097571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:24.893 [2024-11-18 23:13:44.097588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.799 ms 00:18:24.893 [2024-11-18 23:13:44.097604] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.099751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.099786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:24.893 [2024-11-18 23:13:44.099795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:18:24.893 [2024-11-18 23:13:44.099802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.101583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.101614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:24.893 [2024-11-18 23:13:44.101623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.747 ms 00:18:24.893 [2024-11-18 23:13:44.101629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.101952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.101967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.893 [2024-11-18 23:13:44.101976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:18:24.893 [2024-11-18 23:13:44.101984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.120998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.121191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:24.893 [2024-11-18 23:13:44.121216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.996 ms 00:18:24.893 [2024-11-18 23:13:44.121226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.129042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:24.893 [2024-11-18 23:13:44.132042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.132074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.893 [2024-11-18 23:13:44.132086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.776 ms 00:18:24.893 [2024-11-18 23:13:44.132099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.132207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.132222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:24.893 [2024-11-18 23:13:44.132232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:24.893 [2024-11-18 23:13:44.132240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.132309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.132336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.893 [2024-11-18 23:13:44.132345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:24.893 [2024-11-18 23:13:44.132354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.132378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.132391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:18:24.893 [2024-11-18 23:13:44.132400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:24.893 [2024-11-18 23:13:44.132407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.132443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:24.893 [2024-11-18 23:13:44.132454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.132462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:24.893 [2024-11-18 23:13:44.132472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:24.893 [2024-11-18 23:13:44.132481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.136290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.136331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.893 [2024-11-18 23:13:44.136341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.789 ms 00:18:24.893 [2024-11-18 23:13:44.136349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.136427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.893 [2024-11-18 23:13:44.136437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.893 [2024-11-18 23:13:44.136449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:24.893 [2024-11-18 23:13:44.136457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.893 [2024-11-18 23:13:44.137503] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.884 ms, result 0 00:18:25.836  [Copying progress, 2024-11-18T23:13:46Z to 2024-11-18T23:14:39Z: 19/1024 -> 1024/1024 MB, average 18 MBps] [2024-11-18 23:14:39.200412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.200509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.120 [2024-11-18 23:14:39.200600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.120 [2024-11-18 23:14:39.200621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.200676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.120 [2024-11-18 23:14:39.201245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.201326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.120 [2024-11-18 23:14:39.201370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:19:20.120 [2024-11-18 23:14:39.201396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.203335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.203420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.120 [2024-11-18 23:14:39.203465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.912 ms 00:19:20.120 [2024-11-18 23:14:39.203483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.217269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.217363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.120 [2024-11-18 23:14:39.217410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 13.762 ms 00:19:20.120 [2024-11-18 23:14:39.217428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.222234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.222316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.120 [2024-11-18 23:14:39.222359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.768 ms 00:19:20.120 [2024-11-18 23:14:39.222377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.223985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.224068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:20.120 [2024-11-18 23:14:39.224107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:19:20.120 [2024-11-18 23:14:39.224125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.120 [2024-11-18 23:14:39.227838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.120 [2024-11-18 23:14:39.227943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:20.120 [2024-11-18 23:14:39.227987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.599 ms 00:19:20.120 [2024-11-18 23:14:39.228005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.228131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.121 [2024-11-18 23:14:39.228188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:20.121 [2024-11-18 23:14:39.228247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:20.121 [2024-11-18 23:14:39.228265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.230327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.121 [2024-11-18 23:14:39.230404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:20.121 [2024-11-18 23:14:39.230440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:19:20.121 [2024-11-18 23:14:39.230456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.232145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.121 [2024-11-18 23:14:39.232240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:20.121 [2024-11-18 23:14:39.232278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:19:20.121 [2024-11-18 23:14:39.232294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.233513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.121 [2024-11-18 23:14:39.233588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:20.121 [2024-11-18 23:14:39.233626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:19:20.121 [2024-11-18 23:14:39.233642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.234888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.121 [2024-11-18 23:14:39.234965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:20.121 [2024-11-18 
23:14:39.235001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:19:20.121 [2024-11-18 23:14:39.235017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.121 [2024-11-18 23:14:39.235045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:20.121 [2024-11-18 23:14:39.235066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.235998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236779] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.236997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:20.121 [2024-11-18 23:14:39.237110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237183] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 
23:14:39.237330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:20.122 [2024-11-18 23:14:39.237354] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:20.122 [2024-11-18 23:14:39.237360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e018dca0-090d-4735-a005-a51940083468 00:19:20.122 [2024-11-18 23:14:39.237366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:20.122 [2024-11-18 23:14:39.237372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:20.122 [2024-11-18 23:14:39.237377] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:20.122 [2024-11-18 23:14:39.237388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:20.122 [2024-11-18 23:14:39.237394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:20.122 [2024-11-18 23:14:39.237400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:20.122 [2024-11-18 23:14:39.237406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:20.122 [2024-11-18 23:14:39.237411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:20.122 [2024-11-18 23:14:39.237415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:20.122 [2024-11-18 23:14:39.237421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.122 [2024-11-18 23:14:39.237430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:20.122 [2024-11-18 23:14:39.237437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:19:20.122 [2024-11-18 23:14:39.237447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.239110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.122 [2024-11-18 23:14:39.239133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:20.122 [2024-11-18 23:14:39.239140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.649 ms 00:19:20.122 [2024-11-18 23:14:39.239146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.239239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.122 [2024-11-18 23:14:39.239247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:20.122 [2024-11-18 23:14:39.239257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:20.122 [2024-11-18 23:14:39.239264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.244302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.244331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.122 [2024-11-18 23:14:39.244341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.244347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.244389] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.244395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.122 [2024-11-18 23:14:39.244403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.244410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.244451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.244459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.122 [2024-11-18 23:14:39.244465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.244471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.244483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.244490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.122 [2024-11-18 23:14:39.244496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.244504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.254880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.255012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.122 [2024-11-18 23:14:39.255025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.255032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.122 [2024-11-18 23:14:39.263454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.122 [2024-11-18 23:14:39.263485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.122 [2024-11-18 23:14:39.263499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.122 [2024-11-18 23:14:39.263506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.123 [2024-11-18 23:14:39.263569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.123 [2024-11-18 23:14:39.263612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.123 [2024-11-18 23:14:39.263693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:20.123 [2024-11-18 23:14:39.263723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:20.123 [2024-11-18 23:14:39.263741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.123 [2024-11-18 23:14:39.263802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:20.123 [2024-11-18 23:14:39.263858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.123 [2024-11-18 23:14:39.263867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:20.123 [2024-11-18 23:14:39.263874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.123 [2024-11-18 23:14:39.263983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 63.539 ms, result 0 00:19:20.383 00:19:20.383 00:19:20.383 23:14:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:20.383 [2024-11-18 23:14:39.722308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:19:20.383 [2024-11-18 23:14:39.722420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87158 ] 00:19:20.643 [2024-11-18 23:14:39.857302] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.643 [2024-11-18 23:14:39.897808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.643 [2024-11-18 23:14:39.995973] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.643 [2024-11-18 23:14:39.996216] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.907 [2024-11-18 23:14:40.150730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.150768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.907 [2024-11-18 23:14:40.150785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.907 [2024-11-18 23:14:40.150791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.150830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.150838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.907 [2024-11-18 23:14:40.150849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:20.907 [2024-11-18 23:14:40.150855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.150871] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.907 [2024-11-18 23:14:40.151049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.907 [2024-11-18 23:14:40.151061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.151067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.907 [2024-11-18 23:14:40.151075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:19:20.907 [2024-11-18 23:14:40.151083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.152350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.907 [2024-11-18 23:14:40.155193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.155224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.907 [2024-11-18 23:14:40.155233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.845 ms 00:19:20.907 [2024-11-18 23:14:40.155239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.155289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.155297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.907 [2024-11-18 23:14:40.155304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:20.907 [2024-11-18 23:14:40.155329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.161476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.907 [2024-11-18 23:14:40.161502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.907 [2024-11-18 23:14:40.161510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.105 ms 00:19:20.907 [2024-11-18 23:14:40.161519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.161588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.161598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.907 [2024-11-18 23:14:40.161604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:20.907 [2024-11-18 23:14:40.161611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.161649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.161663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.907 [2024-11-18 23:14:40.161671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.907 [2024-11-18 23:14:40.161680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.161700] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.907 [2024-11-18 23:14:40.163255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.163278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.907 [2024-11-18 23:14:40.163286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:19:20.907 [2024-11-18 23:14:40.163292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.163330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.907 [2024-11-18 23:14:40.163341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.907 [2024-11-18 23:14:40.163347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:20.907 [2024-11-18 23:14:40.163353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.907 [2024-11-18 23:14:40.163370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.907 [2024-11-18 23:14:40.163389] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.907 [2024-11-18 23:14:40.163428] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.907 [2024-11-18 23:14:40.163446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.907 [2024-11-18 23:14:40.163532] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.907 [2024-11-18 23:14:40.163542] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.907 [2024-11-18 23:14:40.163550] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.907 [2024-11-18 23:14:40.163558] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.907 [2024-11-18 23:14:40.163569] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163577] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:20.908 [2024-11-18 23:14:40.163583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.908 [2024-11-18 23:14:40.163591] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.908 [2024-11-18 23:14:40.163598] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.908 [2024-11-18 23:14:40.163605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.908 [2024-11-18 23:14:40.163612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.908 [2024-11-18 23:14:40.163618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:20.908 [2024-11-18 23:14:40.163623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.908 [2024-11-18 23:14:40.163692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.908 [2024-11-18 23:14:40.163703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.908 [2024-11-18 23:14:40.163709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:20.908 [2024-11-18 23:14:40.163714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.908 [2024-11-18 23:14:40.163789] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.908 [2024-11-18 23:14:40.163797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.908 [2024-11-18 23:14:40.163804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.908 [2024-11-18 23:14:40.163832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.908 [2024-11-18 23:14:40.163850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.908 [2024-11-18 23:14:40.163861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.908 [2024-11-18 23:14:40.163867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:20.908 [2024-11-18 23:14:40.163873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.908 [2024-11-18 23:14:40.163881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.908 [2024-11-18 23:14:40.163886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:20.908 [2024-11-18 23:14:40.163892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.908 [2024-11-18 23:14:40.163905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163910] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.908 [2024-11-18 23:14:40.163921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.908 [2024-11-18 23:14:40.163939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.908 [2024-11-18 23:14:40.163954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.908 [2024-11-18 23:14:40.163973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.908 [2024-11-18 23:14:40.163984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.908 [2024-11-18 23:14:40.163989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:20.908 [2024-11-18 23:14:40.163994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.908 [2024-11-18 23:14:40.163999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.908 [2024-11-18 23:14:40.164004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:20.908 [2024-11-18 23:14:40.164009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.908 [2024-11-18 23:14:40.164014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.908 [2024-11-18 23:14:40.164020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:20.908 [2024-11-18 23:14:40.164025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.164030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.908 [2024-11-18 23:14:40.164035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:20.908 [2024-11-18 23:14:40.164040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.164045] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.908 [2024-11-18 23:14:40.164055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.908 [2024-11-18 23:14:40.164062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.908 [2024-11-18 23:14:40.164069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.908 [2024-11-18 23:14:40.164075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.908 [2024-11-18 23:14:40.164080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.908 [2024-11-18 23:14:40.164089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.908 
[2024-11-18 23:14:40.164094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.908 [2024-11-18 23:14:40.164099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.908 [2024-11-18 23:14:40.164104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.908 [2024-11-18 23:14:40.164110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.908 [2024-11-18 23:14:40.164117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.908 [2024-11-18 23:14:40.164124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:20.908 [2024-11-18 23:14:40.164129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:20.908 [2024-11-18 23:14:40.164135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:20.908 [2024-11-18 23:14:40.164140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:20.908 [2024-11-18 23:14:40.164145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:20.908 [2024-11-18 23:14:40.164151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:20.908 [2024-11-18 23:14:40.164173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:20.908 [2024-11-18 23:14:40.164179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:20.909 [2024-11-18 23:14:40.164185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:20.909 [2024-11-18 23:14:40.164190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:20.909 [2024-11-18 23:14:40.164218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.909 [2024-11-18 23:14:40.164225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.909 [2024-11-18 23:14:40.164238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.909 [2024-11-18 23:14:40.164244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.909 [2024-11-18 23:14:40.164249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.909 [2024-11-18 23:14:40.164254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.164261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.909 [2024-11-18 23:14:40.164270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:19:20.909 [2024-11-18 23:14:40.164275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.186355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.186401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.909 [2024-11-18 23:14:40.186422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.038 ms 00:19:20.909 [2024-11-18 23:14:40.186437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.186549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.186560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.909 [2024-11-18 23:14:40.186572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:20.909 [2024-11-18 23:14:40.186582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.196983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.197017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.909 [2024-11-18 23:14:40.197033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.336 ms 00:19:20.909 [2024-11-18 23:14:40.197041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.197070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.197078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.909 [2024-11-18 23:14:40.197086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:20.909 [2024-11-18 23:14:40.197094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.197566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.197596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.909 [2024-11-18 23:14:40.197607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:20.909 [2024-11-18 23:14:40.197616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.197762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.197771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.909 [2024-11-18 23:14:40.197780] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:20.909 [2024-11-18 23:14:40.197789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.203616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.203642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.909 [2024-11-18 23:14:40.203654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.803 ms 00:19:20.909 [2024-11-18 23:14:40.203660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.206646] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:20.909 [2024-11-18 23:14:40.206673] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.909 [2024-11-18 23:14:40.206685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.206692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.909 [2024-11-18 23:14:40.206700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:19:20.909 [2024-11-18 23:14:40.206706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.218400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.218434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.909 [2024-11-18 23:14:40.218448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.662 ms 00:19:20.909 [2024-11-18 23:14:40.218455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.220326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.220453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.909 [2024-11-18 23:14:40.220465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:19:20.909 [2024-11-18 23:14:40.220471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.221790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.221814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.909 [2024-11-18 23:14:40.221822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:19:20.909 [2024-11-18 23:14:40.221827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.222084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.222094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.909 [2024-11-18 23:14:40.222101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:19:20.909 [2024-11-18 23:14:40.222106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.239390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.239530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.909 [2024-11-18 23:14:40.239544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.271 ms 00:19:20.909 [2024-11-18 23:14:40.239551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.245578] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:20.909 [2024-11-18 23:14:40.248057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.248083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.909 [2024-11-18 23:14:40.248099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.477 ms 00:19:20.909 [2024-11-18 23:14:40.248105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.248169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.248180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.909 [2024-11-18 23:14:40.248187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:20.909 [2024-11-18 23:14:40.248194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.248273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.248282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.909 [2024-11-18 23:14:40.248292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:20.909 [2024-11-18 23:14:40.248301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.248318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.248325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.909 [2024-11-18 23:14:40.248332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.909 [2024-11-18 23:14:40.248337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-11-18 23:14:40.248368] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.909 [2024-11-18 23:14:40.248376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-11-18 23:14:40.248383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.909 [2024-11-18 23:14:40.248391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.909 [2024-11-18 23:14:40.248397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.910 [2024-11-18 23:14:40.251935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.910 [2024-11-18 23:14:40.251964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.910 [2024-11-18 23:14:40.251973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.517 ms 00:19:20.910 [2024-11-18 23:14:40.251979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.910 [2024-11-18 23:14:40.252036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.910 [2024-11-18 23:14:40.252044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.910 [2024-11-18 23:14:40.252051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:20.910 [2024-11-18 23:14:40.252061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.910 
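Each management step above is emitted by mngt/ftl_mngt.c:trace_step as a name/duration/status record, and each full run closes with a finish_msg total (the 'FTL startup' run just below completes in 101.899 ms). A minimal sketch for skimming those per-step durations out of a captured console log follows; build.log is a hypothetical capture of this output, and the strict name/duration pairing it relies on is an assumption that happens to hold for the trace_step records here:

#!/usr/bin/env bash
# Sketch: list each FTL management step with its logged duration.
# Assumes the trace_step record shape seen above, and that every
# "[FTL][ftl0] name:" record is immediately followed by its
# "[FTL][ftl0] duration:" record. "build.log" is a hypothetical
# capture of this console output.
grep -oE '\[FTL\]\[ftl0\] (name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms)' build.log \
  | sed -E -e 's/^\[FTL\]\[ftl0\] //' -e 's/ [0-9]+$//' \
  | paste - - \
  | awk -F'\t' '{ printf "%-40s %s\n", substr($1, 7), substr($2, 11) }'

The trailing sed expression only strips the run timestamp that follows a step name in wrapped captures like this one; on a one-record-per-line log it is a no-op.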
[2024-11-18 23:14:40.252998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.899 ms, result 0 00:19:22.298  [2024-11-18T23:14:42.620Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-18T23:14:43.564Z] Copying: 36/1024 [MB] (13 MBps) [2024-11-18T23:14:44.508Z] Copying: 51/1024 [MB] (15 MBps) [2024-11-18T23:14:45.450Z] Copying: 62/1024 [MB] (10 MBps) [2024-11-18T23:14:46.839Z] Copying: 79/1024 [MB] (16 MBps) [2024-11-18T23:14:47.412Z] Copying: 99/1024 [MB] (20 MBps) [2024-11-18T23:14:48.801Z] Copying: 126/1024 [MB] (26 MBps) [2024-11-18T23:14:49.743Z] Copying: 144/1024 [MB] (17 MBps) [2024-11-18T23:14:50.687Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-18T23:14:51.632Z] Copying: 166/1024 [MB] (11 MBps) [2024-11-18T23:14:52.576Z] Copying: 185/1024 [MB] (18 MBps) [2024-11-18T23:14:53.518Z] Copying: 202/1024 [MB] (16 MBps) [2024-11-18T23:14:54.463Z] Copying: 213/1024 [MB] (11 MBps) [2024-11-18T23:14:55.406Z] Copying: 232/1024 [MB] (18 MBps) [2024-11-18T23:14:56.403Z] Copying: 251/1024 [MB] (19 MBps) [2024-11-18T23:14:57.791Z] Copying: 265/1024 [MB] (13 MBps) [2024-11-18T23:14:58.735Z] Copying: 276/1024 [MB] (11 MBps) [2024-11-18T23:14:59.680Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-18T23:15:00.627Z] Copying: 302/1024 [MB] (15 MBps) [2024-11-18T23:15:01.571Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-18T23:15:02.515Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-18T23:15:03.473Z] Copying: 335/1024 [MB] (12 MBps) [2024-11-18T23:15:04.417Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-18T23:15:05.805Z] Copying: 364/1024 [MB] (18 MBps) [2024-11-18T23:15:06.751Z] Copying: 376/1024 [MB] (12 MBps) [2024-11-18T23:15:07.695Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-18T23:15:08.640Z] Copying: 399/1024 [MB] (10 MBps) [2024-11-18T23:15:09.582Z] Copying: 409/1024 [MB] (10 MBps) [2024-11-18T23:15:10.526Z] Copying: 433/1024 [MB] (23 MBps) [2024-11-18T23:15:11.470Z] Copying: 448/1024 [MB] (15 MBps) [2024-11-18T23:15:12.413Z] Copying: 468/1024 [MB] (19 MBps) [2024-11-18T23:15:13.801Z] Copying: 494/1024 [MB] (25 MBps) [2024-11-18T23:15:14.746Z] Copying: 514/1024 [MB] (19 MBps) [2024-11-18T23:15:15.690Z] Copying: 528/1024 [MB] (13 MBps) [2024-11-18T23:15:16.634Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-18T23:15:17.578Z] Copying: 557/1024 [MB] (18 MBps) [2024-11-18T23:15:18.522Z] Copying: 571/1024 [MB] (13 MBps) [2024-11-18T23:15:19.467Z] Copying: 587/1024 [MB] (16 MBps) [2024-11-18T23:15:20.408Z] Copying: 598/1024 [MB] (10 MBps) [2024-11-18T23:15:21.796Z] Copying: 611/1024 [MB] (12 MBps) [2024-11-18T23:15:22.742Z] Copying: 630/1024 [MB] (19 MBps) [2024-11-18T23:15:23.687Z] Copying: 641/1024 [MB] (11 MBps) [2024-11-18T23:15:24.634Z] Copying: 661/1024 [MB] (19 MBps) [2024-11-18T23:15:25.634Z] Copying: 673/1024 [MB] (12 MBps) [2024-11-18T23:15:26.579Z] Copying: 692/1024 [MB] (18 MBps) [2024-11-18T23:15:27.525Z] Copying: 708/1024 [MB] (16 MBps) [2024-11-18T23:15:28.471Z] Copying: 724/1024 [MB] (15 MBps) [2024-11-18T23:15:29.417Z] Copying: 740/1024 [MB] (16 MBps) [2024-11-18T23:15:30.804Z] Copying: 762/1024 [MB] (21 MBps) [2024-11-18T23:15:31.750Z] Copying: 781/1024 [MB] (19 MBps) [2024-11-18T23:15:32.695Z] Copying: 798/1024 [MB] (16 MBps) [2024-11-18T23:15:33.641Z] Copying: 810/1024 [MB] (12 MBps) [2024-11-18T23:15:34.585Z] Copying: 828/1024 [MB] (17 MBps) [2024-11-18T23:15:35.531Z] Copying: 846/1024 [MB] (18 MBps) [2024-11-18T23:15:36.478Z] Copying: 867/1024 [MB] (21 MBps) [2024-11-18T23:15:37.423Z] Copying: 878/1024 [MB] (10 MBps) 
[2024-11-18T23:15:38.812Z] Copying: 892/1024 [MB] (13 MBps) [2024-11-18T23:15:39.756Z] Copying: 908/1024 [MB] (16 MBps) [2024-11-18T23:15:40.706Z] Copying: 924/1024 [MB] (16 MBps) [2024-11-18T23:15:41.653Z] Copying: 939/1024 [MB] (14 MBps) [2024-11-18T23:15:42.598Z] Copying: 953/1024 [MB] (13 MBps) [2024-11-18T23:15:43.544Z] Copying: 972/1024 [MB] (19 MBps) [2024-11-18T23:15:44.490Z] Copying: 984/1024 [MB] (12 MBps) [2024-11-18T23:15:45.435Z] Copying: 1004/1024 [MB] (19 MBps) [2024-11-18T23:15:45.696Z] Copying: 1019/1024 [MB] (15 MBps) [2024-11-18T23:15:46.270Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-18 23:15:45.978569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.978660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.892 [2024-11-18 23:15:45.978689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:26.892 [2024-11-18 23:15:45.978719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.978772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.892 [2024-11-18 23:15:45.979659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.979727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.892 [2024-11-18 23:15:45.979762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:20:26.892 [2024-11-18 23:15:45.979792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.980348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.980422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.892 [2024-11-18 23:15:45.980451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:20:26.892 [2024-11-18 23:15:45.980477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.985269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.985432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.892 [2024-11-18 23:15:45.985451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.747 ms 00:20:26.892 [2024-11-18 23:15:45.985462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.990851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.990885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.892 [2024-11-18 23:15:45.990894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:20:26.892 [2024-11-18 23:15:45.990901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.993253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.993287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.892 [2024-11-18 23:15:45.993295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:20:26.892 [2024-11-18 23:15:45.993301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.997265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:26.892 [2024-11-18 23:15:45.997298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.892 [2024-11-18 23:15:45.997307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms 00:20:26.892 [2024-11-18 23:15:45.997313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:45.997405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:45.997413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.892 [2024-11-18 23:15:45.997420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:26.892 [2024-11-18 23:15:45.997435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.892 [2024-11-18 23:15:46.000366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.892 [2024-11-18 23:15:46.000505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.893 [2024-11-18 23:15:46.000525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.917 ms 00:20:26.893 [2024-11-18 23:15:46.000535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.893 [2024-11-18 23:15:46.002196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.893 [2024-11-18 23:15:46.002226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.893 [2024-11-18 23:15:46.002233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.616 ms 00:20:26.893 [2024-11-18 23:15:46.002239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.893 [2024-11-18 23:15:46.004204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.893 [2024-11-18 23:15:46.004234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.893 [2024-11-18 23:15:46.004242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:20:26.893 [2024-11-18 23:15:46.004249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.893 [2024-11-18 23:15:46.006349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.893 [2024-11-18 23:15:46.006378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:26.893 [2024-11-18 23:15:46.006385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:20:26.893 [2024-11-18 23:15:46.006392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.893 [2024-11-18 23:15:46.006417] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.893 [2024-11-18 23:15:46.006435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 
261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006797] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.893 [2024-11-18 23:15:46.006809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 
23:15:46.006953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.006995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.894 [2024-11-18 23:15:46.007083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.894 [2024-11-18 23:15:46.007090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e018dca0-090d-4735-a005-a51940083468 00:20:26.894 [2024-11-18 23:15:46.007096] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.894 [2024-11-18 23:15:46.007102] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.894 [2024-11-18 23:15:46.007108] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.894 [2024-11-18 23:15:46.007121] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
00:20:26.894 [2024-11-18 23:15:46.007127] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.894 [2024-11-18 23:15:46.007134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.894 [2024-11-18 23:15:46.007141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.894 [2024-11-18 23:15:46.007146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.894 [2024-11-18 23:15:46.007151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.894 [2024-11-18 23:15:46.007172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.894 [2024-11-18 23:15:46.007178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.894 [2024-11-18 23:15:46.007202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:20:26.894 [2024-11-18 23:15:46.007218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.894 [2024-11-18 23:15:46.010191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.894 [2024-11-18 23:15:46.010214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.894 [2024-11-18 23:15:46.010222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.841 ms 00:20:26.894 [2024-11-18 23:15:46.010231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.894 [2024-11-18 23:15:46.010325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.894 [2024-11-18 23:15:46.010338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.894 [2024-11-18 23:15:46.010345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:26.894 [2024-11-18 23:15:46.010351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.894 [2024-11-18 23:15:46.015850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.894 [2024-11-18 23:15:46.015881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.894 [2024-11-18 23:15:46.015893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.894 [2024-11-18 23:15:46.015900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.894 [2024-11-18 23:15:46.015945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.894 [2024-11-18 23:15:46.015956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.894 [2024-11-18 23:15:46.015962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.894 [2024-11-18 23:15:46.015972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.894 [2024-11-18 23:15:46.016020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.894 [2024-11-18 23:15:46.016028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.895 [2024-11-18 23:15:46.016035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.016042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.016054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.016061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.895 [2024-11-18 23:15:46.016069] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.016075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.027005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.027139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.895 [2024-11-18 23:15:46.027173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.027184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.035796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.035828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.895 [2024-11-18 23:15:46.035843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.035849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.035893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.035901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.895 [2024-11-18 23:15:46.035912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.035919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.035941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.035948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.895 [2024-11-18 23:15:46.035954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.035962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.036019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.036028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.895 [2024-11-18 23:15:46.036034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.036041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.036064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.036071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:26.895 [2024-11-18 23:15:46.036081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.036088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.036124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.036132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.895 [2024-11-18 23:15:46.036138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.036144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.036195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.895 [2024-11-18 23:15:46.036204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:20:26.895 [2024-11-18 23:15:46.036211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.895 [2024-11-18 23:15:46.036219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.895 [2024-11-18 23:15:46.036328] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.762 ms, result 0 00:20:26.895 00:20:26.895 00:20:26.895 23:15:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:29.434 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:29.434 23:15:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:29.434 [2024-11-18 23:15:48.336380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:29.434 [2024-11-18 23:15:48.336513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87870 ] 00:20:29.434 [2024-11-18 23:15:48.482477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.435 [2024-11-18 23:15:48.524267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.435 [2024-11-18 23:15:48.623742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.435 [2024-11-18 23:15:48.623803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.435 [2024-11-18 23:15:48.778420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.778457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.435 [2024-11-18 23:15:48.778471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.435 [2024-11-18 23:15:48.778477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.778516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.778524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.435 [2024-11-18 23:15:48.778530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:29.435 [2024-11-18 23:15:48.778536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.778549] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.435 [2024-11-18 23:15:48.778737] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.435 [2024-11-18 23:15:48.778749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.778755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.435 [2024-11-18 23:15:48.778764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:20:29.435 [2024-11-18 23:15:48.778771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.780261] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.435 
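After the 'FTL shutdown' finish_msg, the harness checks the restored file against its recorded checksum and then writes the test file back into the ftl0 bdev, as the restore.sh @76 and @79 lines above show. A minimal sketch of that verify-then-write round trip, reusing the exact paths and flags logged above; it assumes the testfile, its .md5, and ftl.json exist as they do in this run:

#!/usr/bin/env bash
# Sketch of the md5-verify + spdk_dd write-back sequence logged above.
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk
# restore.sh@76: verify the file read back from ftl0 against its checksum
md5sum -c "$SPDK/test/ftl/testfile.md5"
# restore.sh@79: write the test file into ftl0 at a 131072-block offset,
# with the FTL bdev stack described by ftl.json
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
  --json="$SPDK/test/ftl/config/ftl.json" --seek=131072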
[2024-11-18 23:15:48.783169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.783197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.435 [2024-11-18 23:15:48.783207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:20:29.435 [2024-11-18 23:15:48.783213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.783265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.783277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.435 [2024-11-18 23:15:48.783283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:29.435 [2024-11-18 23:15:48.783290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.789585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.789743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.435 [2024-11-18 23:15:48.789762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:20:29.435 [2024-11-18 23:15:48.789770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.789846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.789854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.435 [2024-11-18 23:15:48.789861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:29.435 [2024-11-18 23:15:48.789867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.789898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.789908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.435 [2024-11-18 23:15:48.789916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:29.435 [2024-11-18 23:15:48.789922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.789942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.435 [2024-11-18 23:15:48.791543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.791566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.435 [2024-11-18 23:15:48.791573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:20:29.435 [2024-11-18 23:15:48.791580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.791609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.791616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.435 [2024-11-18 23:15:48.791625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:29.435 [2024-11-18 23:15:48.791631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.791656] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.435 [2024-11-18 23:15:48.791680] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 
0x150 bytes 00:20:29.435 [2024-11-18 23:15:48.791719] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.435 [2024-11-18 23:15:48.791732] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.435 [2024-11-18 23:15:48.791813] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.435 [2024-11-18 23:15:48.791822] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.435 [2024-11-18 23:15:48.791831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.435 [2024-11-18 23:15:48.791838] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.435 [2024-11-18 23:15:48.791848] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.435 [2024-11-18 23:15:48.791856] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:29.435 [2024-11-18 23:15:48.791862] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.435 [2024-11-18 23:15:48.791867] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.435 [2024-11-18 23:15:48.791873] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.435 [2024-11-18 23:15:48.791879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.791885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.435 [2024-11-18 23:15:48.791891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:29.435 [2024-11-18 23:15:48.791897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.791963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.435 [2024-11-18 23:15:48.791971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.435 [2024-11-18 23:15:48.791977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:29.435 [2024-11-18 23:15:48.791983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.435 [2024-11-18 23:15:48.792056] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.435 [2024-11-18 23:15:48.792064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.435 [2024-11-18 23:15:48.792071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.435 [2024-11-18 23:15:48.792096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.435 [2024-11-18 23:15:48.792114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:20:29.435 [2024-11-18 23:15:48.792126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.435 [2024-11-18 23:15:48.792131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:29.435 [2024-11-18 23:15:48.792139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.435 [2024-11-18 23:15:48.792145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.435 [2024-11-18 23:15:48.792167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:29.435 [2024-11-18 23:15:48.792174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.435 [2024-11-18 23:15:48.792184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.435 [2024-11-18 23:15:48.792202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.435 [2024-11-18 23:15:48.792220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.435 [2024-11-18 23:15:48.792239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.435 [2024-11-18 23:15:48.792261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.435 [2024-11-18 23:15:48.792274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.435 [2024-11-18 23:15:48.792281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:29.435 [2024-11-18 23:15:48.792287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.436 [2024-11-18 23:15:48.792292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.436 [2024-11-18 23:15:48.792298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:29.436 [2024-11-18 23:15:48.792304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.436 [2024-11-18 23:15:48.792310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.436 [2024-11-18 23:15:48.792316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:29.436 [2024-11-18 23:15:48.792322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.436 [2024-11-18 23:15:48.792328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.436 [2024-11-18 23:15:48.792334] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:29.436 [2024-11-18 23:15:48.792340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.436 [2024-11-18 23:15:48.792346] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.436 [2024-11-18 23:15:48.792356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.436 [2024-11-18 23:15:48.792363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.436 [2024-11-18 23:15:48.792374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.436 [2024-11-18 23:15:48.792382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.436 [2024-11-18 23:15:48.792388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.436 [2024-11-18 23:15:48.792393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.436 [2024-11-18 23:15:48.792400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.436 [2024-11-18 23:15:48.792406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.436 [2024-11-18 23:15:48.792412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.436 [2024-11-18 23:15:48.792419] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.436 [2024-11-18 23:15:48.792427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:29.436 [2024-11-18 23:15:48.792441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:29.436 [2024-11-18 23:15:48.792447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:29.436 [2024-11-18 23:15:48.792454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:29.436 [2024-11-18 23:15:48.792466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:29.436 [2024-11-18 23:15:48.792474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:29.436 [2024-11-18 23:15:48.792480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:29.436 [2024-11-18 23:15:48.792486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:29.436 [2024-11-18 23:15:48.792493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:29.436 [2024-11-18 23:15:48.792499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:29.436 [2024-11-18 
23:15:48.792511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:29.436 [2024-11-18 23:15:48.792530] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.436 [2024-11-18 23:15:48.792537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.436 [2024-11-18 23:15:48.792551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.436 [2024-11-18 23:15:48.792557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.436 [2024-11-18 23:15:48.792564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.436 [2024-11-18 23:15:48.792571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.436 [2024-11-18 23:15:48.792579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.436 [2024-11-18 23:15:48.792585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:20:29.436 [2024-11-18 23:15:48.792590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.697 [2024-11-18 23:15:48.814909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.697 [2024-11-18 23:15:48.814984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.698 [2024-11-18 23:15:48.815021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.281 ms 00:20:29.698 [2024-11-18 23:15:48.815038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.815259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.815281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.698 [2024-11-18 23:15:48.815300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:20:29.698 [2024-11-18 23:15:48.815335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.826521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.826553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.698 [2024-11-18 23:15:48.826561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.075 ms 00:20:29.698 [2024-11-18 23:15:48.826567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.826591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.826597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:20:29.698 [2024-11-18 23:15:48.826604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:29.698 [2024-11-18 23:15:48.826611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.827025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.827049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.698 [2024-11-18 23:15:48.827056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:20:29.698 [2024-11-18 23:15:48.827063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.827191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.827199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.698 [2024-11-18 23:15:48.827206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:29.698 [2024-11-18 23:15:48.827212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.832676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.832703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.698 [2024-11-18 23:15:48.832715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.446 ms 00:20:29.698 [2024-11-18 23:15:48.832721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.835726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:29.698 [2024-11-18 23:15:48.835754] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.698 [2024-11-18 23:15:48.835766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.835773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.698 [2024-11-18 23:15:48.835780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:20:29.698 [2024-11-18 23:15:48.835785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.847343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.847375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.698 [2024-11-18 23:15:48.847388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.527 ms 00:20:29.698 [2024-11-18 23:15:48.847395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.849086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.849224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.698 [2024-11-18 23:15:48.849236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms 00:20:29.698 [2024-11-18 23:15:48.849243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.850881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.850907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.698 [2024-11-18 23:15:48.850914] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:20:29.698 [2024-11-18 23:15:48.850920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.851194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.851205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.698 [2024-11-18 23:15:48.851212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:20:29.698 [2024-11-18 23:15:48.851222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.868341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.868483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.698 [2024-11-18 23:15:48.868498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.105 ms 00:20:29.698 [2024-11-18 23:15:48.868506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.874996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.698 [2024-11-18 23:15:48.877513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.877545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.698 [2024-11-18 23:15:48.877556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.979 ms 00:20:29.698 [2024-11-18 23:15:48.877563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.877613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.877624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.698 [2024-11-18 23:15:48.877631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:29.698 [2024-11-18 23:15:48.877640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.877722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.877731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.698 [2024-11-18 23:15:48.877737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:29.698 [2024-11-18 23:15:48.877745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.877761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.877768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.698 [2024-11-18 23:15:48.877775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.698 [2024-11-18 23:15:48.877781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.877810] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.698 [2024-11-18 23:15:48.877817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.877823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.698 [2024-11-18 23:15:48.877832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.698 [2024-11-18 23:15:48.877838] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.882119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.882266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.698 [2024-11-18 23:15:48.882281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.265 ms 00:20:29.698 [2024-11-18 23:15:48.882288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.882349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.698 [2024-11-18 23:15:48.882359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.698 [2024-11-18 23:15:48.882367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:29.698 [2024-11-18 23:15:48.882377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.698 [2024-11-18 23:15:48.883244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.449 ms, result 0 00:20:30.642  [2024-11-18T23:15:50.964Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-18T23:15:51.908Z] Copying: 39/1024 [MB] (21 MBps) [2024-11-18T23:15:53.297Z] Copying: 56/1024 [MB] (16 MBps) [2024-11-18T23:15:54.263Z] Copying: 72/1024 [MB] (15 MBps) [2024-11-18T23:15:55.218Z] Copying: 89/1024 [MB] (17 MBps) [2024-11-18T23:15:56.194Z] Copying: 107/1024 [MB] (18 MBps) [2024-11-18T23:15:57.140Z] Copying: 124/1024 [MB] (16 MBps) [2024-11-18T23:15:58.084Z] Copying: 139/1024 [MB] (15 MBps) [2024-11-18T23:15:59.030Z] Copying: 154/1024 [MB] (15 MBps) [2024-11-18T23:15:59.975Z] Copying: 169/1024 [MB] (15 MBps) [2024-11-18T23:16:00.922Z] Copying: 185/1024 [MB] (15 MBps) [2024-11-18T23:16:02.311Z] Copying: 200/1024 [MB] (14 MBps) [2024-11-18T23:16:03.257Z] Copying: 217/1024 [MB] (16 MBps) [2024-11-18T23:16:04.201Z] Copying: 236/1024 [MB] (19 MBps) [2024-11-18T23:16:05.146Z] Copying: 257/1024 [MB] (20 MBps) [2024-11-18T23:16:06.090Z] Copying: 279/1024 [MB] (22 MBps) [2024-11-18T23:16:07.036Z] Copying: 298/1024 [MB] (19 MBps) [2024-11-18T23:16:07.982Z] Copying: 315/1024 [MB] (16 MBps) [2024-11-18T23:16:08.925Z] Copying: 331/1024 [MB] (16 MBps) [2024-11-18T23:16:10.312Z] Copying: 356/1024 [MB] (24 MBps) [2024-11-18T23:16:11.257Z] Copying: 377/1024 [MB] (21 MBps) [2024-11-18T23:16:12.201Z] Copying: 392/1024 [MB] (15 MBps) [2024-11-18T23:16:13.147Z] Copying: 410/1024 [MB] (18 MBps) [2024-11-18T23:16:14.091Z] Copying: 421/1024 [MB] (10 MBps) [2024-11-18T23:16:15.036Z] Copying: 441/1024 [MB] (20 MBps) [2024-11-18T23:16:15.980Z] Copying: 457/1024 [MB] (15 MBps) [2024-11-18T23:16:16.924Z] Copying: 482/1024 [MB] (24 MBps) [2024-11-18T23:16:18.310Z] Copying: 511/1024 [MB] (28 MBps) [2024-11-18T23:16:19.256Z] Copying: 541/1024 [MB] (30 MBps) [2024-11-18T23:16:20.202Z] Copying: 561/1024 [MB] (19 MBps) [2024-11-18T23:16:21.159Z] Copying: 576/1024 [MB] (15 MBps) [2024-11-18T23:16:22.103Z] Copying: 593/1024 [MB] (16 MBps) [2024-11-18T23:16:23.078Z] Copying: 610/1024 [MB] (17 MBps) [2024-11-18T23:16:24.046Z] Copying: 631/1024 [MB] (20 MBps) [2024-11-18T23:16:24.991Z] Copying: 641/1024 [MB] (10 MBps) [2024-11-18T23:16:25.935Z] Copying: 659/1024 [MB] (17 MBps) [2024-11-18T23:16:27.319Z] Copying: 680/1024 [MB] (20 MBps) [2024-11-18T23:16:28.262Z] Copying: 715/1024 [MB] (35 MBps) [2024-11-18T23:16:29.206Z] Copying: 739/1024 [MB] (23 MBps) [2024-11-18T23:16:30.147Z] Copying: 751/1024 [MB] (12 
MBps) [2024-11-18T23:16:31.091Z] Copying: 762/1024 [MB] (11 MBps) [2024-11-18T23:16:32.035Z] Copying: 790792/1048576 [kB] (10088 kBps) [2024-11-18T23:16:32.979Z] Copying: 805/1024 [MB] (33 MBps) [2024-11-18T23:16:33.921Z] Copying: 822/1024 [MB] (16 MBps) [2024-11-18T23:16:35.307Z] Copying: 834/1024 [MB] (12 MBps) [2024-11-18T23:16:36.251Z] Copying: 845/1024 [MB] (10 MBps) [2024-11-18T23:16:37.196Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-18T23:16:38.139Z] Copying: 866/1024 [MB] (10 MBps) [2024-11-18T23:16:39.081Z] Copying: 878/1024 [MB] (12 MBps) [2024-11-18T23:16:40.033Z] Copying: 889/1024 [MB] (10 MBps) [2024-11-18T23:16:40.976Z] Copying: 900/1024 [MB] (11 MBps) [2024-11-18T23:16:41.920Z] Copying: 932064/1048576 [kB] (9976 kBps) [2024-11-18T23:16:43.310Z] Copying: 920/1024 [MB] (10 MBps) [2024-11-18T23:16:44.253Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-18T23:16:45.195Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-18T23:16:46.140Z] Copying: 955/1024 [MB] (13 MBps) [2024-11-18T23:16:47.082Z] Copying: 970/1024 [MB] (15 MBps) [2024-11-18T23:16:48.016Z] Copying: 982/1024 [MB] (12 MBps) [2024-11-18T23:16:48.951Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-18T23:16:50.333Z] Copying: 1012/1024 [MB] (15 MBps) [2024-11-18T23:16:50.594Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-18T23:16:50.594Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-18 23:16:50.399188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.399254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:31.216 [2024-11-18 23:16:50.399270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:31.216 [2024-11-18 23:16:50.399279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.402242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:31.216 [2024-11-18 23:16:50.404723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.404837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:31.216 [2024-11-18 23:16:50.404863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.412 ms 00:21:31.216 [2024-11-18 23:16:50.404880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.423446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.423497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:31.216 [2024-11-18 23:16:50.423511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.353 ms 00:21:31.216 [2024-11-18 23:16:50.423520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.446812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.446859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:31.216 [2024-11-18 23:16:50.446872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.273 ms 00:21:31.216 [2024-11-18 23:16:50.446881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.453274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.453323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:31.216 [2024-11-18 23:16:50.453336] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.347 ms 00:21:31.216 [2024-11-18 23:16:50.453344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.455965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.456012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:31.216 [2024-11-18 23:16:50.456023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.575 ms 00:21:31.216 [2024-11-18 23:16:50.456032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.216 [2024-11-18 23:16:50.461580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.216 [2024-11-18 23:16:50.461638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:31.216 [2024-11-18 23:16:50.461649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.505 ms 00:21:31.216 [2024-11-18 23:16:50.461657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.477 [2024-11-18 23:16:50.767029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.477 [2024-11-18 23:16:50.767090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:31.477 [2024-11-18 23:16:50.767107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 305.316 ms 00:21:31.477 [2024-11-18 23:16:50.767116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.477 [2024-11-18 23:16:50.770365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.478 [2024-11-18 23:16:50.770580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:31.478 [2024-11-18 23:16:50.770602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:21:31.478 [2024-11-18 23:16:50.770611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.478 [2024-11-18 23:16:50.773875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.478 [2024-11-18 23:16:50.774091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:31.478 [2024-11-18 23:16:50.774113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.969 ms 00:21:31.478 [2024-11-18 23:16:50.774122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.478 [2024-11-18 23:16:50.776439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.478 [2024-11-18 23:16:50.776502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:31.478 [2024-11-18 23:16:50.776515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:21:31.478 [2024-11-18 23:16:50.776523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.478 [2024-11-18 23:16:50.778547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.478 [2024-11-18 23:16:50.778594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:31.478 [2024-11-18 23:16:50.778605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:21:31.478 [2024-11-18 23:16:50.778613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.478 [2024-11-18 23:16:50.778652] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:31.478 [2024-11-18 23:16:50.778668] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 107264 / 261120 wr_cnt: 1 state: open 00:21:31.478 [2024-11-18 23:16:50.778680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778895] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.778992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 
23:16:50.779091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:31.478 [2024-11-18 23:16:50.779140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:21:31.479 [2024-11-18 23:16:50.779339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:21:31.479 [2024-11-18 23:16:50.779555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:31.479 [2024-11-18 23:16:50.779563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e018dca0-090d-4735-a005-a51940083468 00:21:31.479 [2024-11-18 23:16:50.779574] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107264 00:21:31.479 [2024-11-18 23:16:50.779594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108224 00:21:31.479 [2024-11-18 23:16:50.779601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107264 00:21:31.479 [2024-11-18 23:16:50.779619] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:21:31.479 [2024-11-18 23:16:50.779626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:31.479 [2024-11-18 23:16:50.779635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:31.479 [2024-11-18 23:16:50.779642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:31.479 [2024-11-18 23:16:50.779648] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:31.479 [2024-11-18 23:16:50.779656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:31.479 [2024-11-18 23:16:50.779664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.479 [2024-11-18 23:16:50.779672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:31.479 [2024-11-18 23:16:50.779680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:21:31.479 [2024-11-18 23:16:50.779688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.479 [2024-11-18 23:16:50.781973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.479 [2024-11-18 23:16:50.782011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:31.479 [2024-11-18 23:16:50.782023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.258 ms 00:21:31.479 [2024-11-18 23:16:50.782031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.479 [2024-11-18 23:16:50.782178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:31.479 [2024-11-18 23:16:50.782190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:31.479 [2024-11-18 23:16:50.782200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:31.479 [2024-11-18 23:16:50.782212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.479 [2024-11-18 23:16:50.788850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.479 [2024-11-18 23:16:50.789066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:31.479 [2024-11-18 23:16:50.789087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.479 [2024-11-18 23:16:50.789096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.479 [2024-11-18 23:16:50.789189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.479 [2024-11-18 23:16:50.789199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:31.479 [2024-11-18 23:16:50.789218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.479 [2024-11-18 23:16:50.789228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:31.479 [2024-11-18 23:16:50.789298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.479 [2024-11-18 23:16:50.789314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:31.479 [2024-11-18 23:16:50.789324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.479 [2024-11-18 23:16:50.789332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.789348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.789357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:31.480 [2024-11-18 23:16:50.789365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.789373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.803451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.803690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:31.480 [2024-11-18 23:16:50.803709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.803718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:31.480 [2024-11-18 23:16:50.814532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.814541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:31.480 [2024-11-18 23:16:50.814658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.814666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:31.480 [2024-11-18 23:16:50.814720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.814729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:31.480 [2024-11-18 23:16:50.814827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.814835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:31.480 [2024-11-18 23:16:50.814884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 
23:16:50.814892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.814933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.814942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:31.480 [2024-11-18 23:16:50.814951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.814964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.815015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:31.480 [2024-11-18 23:16:50.815029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:31.480 [2024-11-18 23:16:50.815040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:31.480 [2024-11-18 23:16:50.815049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:31.480 [2024-11-18 23:16:50.815224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.117 ms, result 0 00:21:32.433 00:21:32.433 00:21:32.433 23:16:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:32.433 [2024-11-18 23:16:51.629631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:32.433 [2024-11-18 23:16:51.629784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88521 ] 00:21:32.433 [2024-11-18 23:16:51.784417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.707 [2024-11-18 23:16:51.835771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.707 [2024-11-18 23:16:51.950264] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:32.707 [2024-11-18 23:16:51.950612] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:32.969 [2024-11-18 23:16:52.112774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.112833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:32.969 [2024-11-18 23:16:52.112854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:32.969 [2024-11-18 23:16:52.112863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.112922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.112934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:32.969 [2024-11-18 23:16:52.112943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:32.969 [2024-11-18 23:16:52.112952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.112973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:32.969 [2024-11-18 23:16:52.113293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:32.969 
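(Editor's aside on the spdk_dd invocation traced just above: restore.sh reads a window of ftl0 back out with --skip=131072 --count=262144. A minimal sketch of the offset arithmetic follows, assuming those values are counted in the bdev's logical blocks and that the block size is 4096 bytes — the 4 KiB figure is an inference, not stated in the log, but it is the only size consistent with the "1024/1024 [MB]" total shown by the copy progress. The dd_window helper name is hypothetical.)

    # Back-of-the-envelope check of the --skip/--count window in the
    # spdk_dd command above. Values are in bdev logical blocks; the
    # 4096-byte block size is an assumption that matches the 1024 MB
    # total reported by the progress output.

    def dd_window(skip_blocks, count_blocks, block_size=4096):
        """Return (byte_offset, byte_length) for a --skip/--count pair."""
        return skip_blocks * block_size, count_blocks * block_size

    offset, length = dd_window(131072, 262144)
    print(f"read {length >> 20} MiB starting at byte offset {offset >> 20} MiB")
    # -> read 1024 MiB starting at byte offset 512 MiB

(Under those assumptions the command reads a 1 GiB window starting 512 MiB into ftl0, which appears to match the test's pattern of writing the device in one pass, shutting the FTL instance down, and restoring/verifying a fixed window after bring-up.)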
[2024-11-18 23:16:52.113313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.113322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:32.969 [2024-11-18 23:16:52.113331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:32.969 [2024-11-18 23:16:52.113343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.114942] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:32.969 [2024-11-18 23:16:52.118789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.118840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:32.969 [2024-11-18 23:16:52.118861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:21:32.969 [2024-11-18 23:16:52.118869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.118946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.118957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:32.969 [2024-11-18 23:16:52.118967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:32.969 [2024-11-18 23:16:52.118979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.126892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.126937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:32.969 [2024-11-18 23:16:52.126948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.862 ms 00:21:32.969 [2024-11-18 23:16:52.126961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.127070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.127082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:32.969 [2024-11-18 23:16:52.127092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:32.969 [2024-11-18 23:16:52.127101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.127181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.127193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:32.969 [2024-11-18 23:16:52.127204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:32.969 [2024-11-18 23:16:52.127212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.127243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:32.969 [2024-11-18 23:16:52.129250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 23:16:52.129496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:32.969 [2024-11-18 23:16:52.129514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:21:32.969 [2024-11-18 23:16:52.129534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.129574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.969 [2024-11-18 
23:16:52.129583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:32.969 [2024-11-18 23:16:52.129592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:32.969 [2024-11-18 23:16:52.129600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.969 [2024-11-18 23:16:52.129623] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:32.969 [2024-11-18 23:16:52.129652] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:32.970 [2024-11-18 23:16:52.129692] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:32.970 [2024-11-18 23:16:52.129713] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:32.970 [2024-11-18 23:16:52.129818] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:32.970 [2024-11-18 23:16:52.129835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:32.970 [2024-11-18 23:16:52.129847] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:32.970 [2024-11-18 23:16:52.129858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:32.970 [2024-11-18 23:16:52.129871] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:32.970 [2024-11-18 23:16:52.129881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:32.970 [2024-11-18 23:16:52.129889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:32.970 [2024-11-18 23:16:52.129896] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:32.970 [2024-11-18 23:16:52.129904] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:32.970 [2024-11-18 23:16:52.129915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.970 [2024-11-18 23:16:52.129928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:32.970 [2024-11-18 23:16:52.129936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:21:32.970 [2024-11-18 23:16:52.129944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.970 [2024-11-18 23:16:52.130027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.970 [2024-11-18 23:16:52.130044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:32.970 [2024-11-18 23:16:52.130053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:32.970 [2024-11-18 23:16:52.130063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.970 [2024-11-18 23:16:52.130196] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:32.970 [2024-11-18 23:16:52.130210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:32.970 [2024-11-18 23:16:52.130221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130247] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:21:32.970 [2024-11-18 23:16:52.130255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:32.970 [2024-11-18 23:16:52.130282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.970 [2024-11-18 23:16:52.130301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:32.970 [2024-11-18 23:16:52.130310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:32.970 [2024-11-18 23:16:52.130319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:32.970 [2024-11-18 23:16:52.130329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:32.970 [2024-11-18 23:16:52.130340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:32.970 [2024-11-18 23:16:52.130350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:32.970 [2024-11-18 23:16:52.130372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:32.970 [2024-11-18 23:16:52.130395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:32.970 [2024-11-18 23:16:52.130415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:32.970 [2024-11-18 23:16:52.130437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:32.970 [2024-11-18 23:16:52.130457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:32.970 [2024-11-18 23:16:52.130483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.970 [2024-11-18 23:16:52.130498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:32.970 [2024-11-18 23:16:52.130504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:32.970 [2024-11-18 23:16:52.130511] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:32.970 [2024-11-18 23:16:52.130517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:32.970 [2024-11-18 23:16:52.130523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:32.970 [2024-11-18 23:16:52.130530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:32.970 [2024-11-18 23:16:52.130543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:32.970 [2024-11-18 23:16:52.130550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130557] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:32.970 [2024-11-18 23:16:52.130565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:32.970 [2024-11-18 23:16:52.130573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:32.970 [2024-11-18 23:16:52.130600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:32.970 [2024-11-18 23:16:52.130610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:32.970 [2024-11-18 23:16:52.130617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:32.970 [2024-11-18 23:16:52.130625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:32.970 [2024-11-18 23:16:52.130635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:32.970 [2024-11-18 23:16:52.130643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:32.970 [2024-11-18 23:16:52.130652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:32.970 [2024-11-18 23:16:52.130661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:32.970 [2024-11-18 23:16:52.130677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:32.970 [2024-11-18 23:16:52.130685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:32.970 [2024-11-18 23:16:52.130693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:32.970 [2024-11-18 23:16:52.130700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:32.970 [2024-11-18 23:16:52.130707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:32.970 [2024-11-18 23:16:52.130714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:32.970 [2024-11-18 23:16:52.130722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe 
ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:32.970 [2024-11-18 23:16:52.130729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:32.970 [2024-11-18 23:16:52.130739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:32.970 [2024-11-18 23:16:52.130787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:32.970 [2024-11-18 23:16:52.130801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:32.970 [2024-11-18 23:16:52.130820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:32.970 [2024-11-18 23:16:52.130828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:32.970 [2024-11-18 23:16:52.130835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:32.970 [2024-11-18 23:16:52.130845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.970 [2024-11-18 23:16:52.130854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:32.970 [2024-11-18 23:16:52.130865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:21:32.971 [2024-11-18 23:16:52.130873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.152105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.152190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:32.971 [2024-11-18 23:16:52.152209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.184 ms 00:21:32.971 [2024-11-18 23:16:52.152222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.152333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.152349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:32.971 [2024-11-18 23:16:52.152370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:32.971 [2024-11-18 23:16:52.152389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.164060] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.164108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:32.971 [2024-11-18 23:16:52.164126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.596 ms 00:21:32.971 [2024-11-18 23:16:52.164134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.164188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.164198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:32.971 [2024-11-18 23:16:52.164208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:32.971 [2024-11-18 23:16:52.164217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.164726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.164774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:32.971 [2024-11-18 23:16:52.164785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:21:32.971 [2024-11-18 23:16:52.164798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.164943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.164955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:32.971 [2024-11-18 23:16:52.164965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:32.971 [2024-11-18 23:16:52.164975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.171744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.171787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:32.971 [2024-11-18 23:16:52.171807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.741 ms 00:21:32.971 [2024-11-18 23:16:52.171815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.175624] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:32.971 [2024-11-18 23:16:52.175673] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:32.971 [2024-11-18 23:16:52.175686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.175695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:32.971 [2024-11-18 23:16:52.175704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:21:32.971 [2024-11-18 23:16:52.175712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.191949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.192007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:32.971 [2024-11-18 23:16:52.192022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.186 ms 00:21:32.971 [2024-11-18 23:16:52.192030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.195199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.195241] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:32.971 [2024-11-18 23:16:52.195252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.112 ms 00:21:32.971 [2024-11-18 23:16:52.195259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.197910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.197956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:32.971 [2024-11-18 23:16:52.197967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:21:32.971 [2024-11-18 23:16:52.197975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.198339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.198360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:32.971 [2024-11-18 23:16:52.198375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:21:32.971 [2024-11-18 23:16:52.198387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.225986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.226279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:32.971 [2024-11-18 23:16:52.226320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.577 ms 00:21:32.971 [2024-11-18 23:16:52.226330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.234437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:32.971 [2024-11-18 23:16:52.237935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.238120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:32.971 [2024-11-18 23:16:52.238150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.556 ms 00:21:32.971 [2024-11-18 23:16:52.238179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.238268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.238280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:32.971 [2024-11-18 23:16:52.238294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:32.971 [2024-11-18 23:16:52.238302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.240139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.240202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:32.971 [2024-11-18 23:16:52.240214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:21:32.971 [2024-11-18 23:16:52.240228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.240257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.240272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:32.971 [2024-11-18 23:16:52.240285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:32.971 [2024-11-18 23:16:52.240293] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.240335] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:32.971 [2024-11-18 23:16:52.240346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.240355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:32.971 [2024-11-18 23:16:52.240365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:32.971 [2024-11-18 23:16:52.240378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.246144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.246208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:32.971 [2024-11-18 23:16:52.246220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.744 ms 00:21:32.971 [2024-11-18 23:16:52.246228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.246316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:32.971 [2024-11-18 23:16:52.246330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:32.971 [2024-11-18 23:16:52.246340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:32.971 [2024-11-18 23:16:52.246348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:32.971 [2024-11-18 23:16:52.247559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.282 ms, result 0 00:21:34.358  [2024-11-18T23:16:54.679Z] Copying: 8840/1048576 [kB] (8840 kBps) [2024-11-18T23:16:55.622Z] Copying: 23/1024 [MB] (14 MBps) [2024-11-18T23:16:56.607Z] Copying: 33/1024 [MB] (10 MBps) [2024-11-18T23:16:57.549Z] Copying: 48/1024 [MB] (14 MBps) [2024-11-18T23:16:58.487Z] Copying: 58/1024 [MB] (10 MBps) [2024-11-18T23:16:59.870Z] Copying: 69/1024 [MB] (11 MBps) [2024-11-18T23:17:00.441Z] Copying: 80/1024 [MB] (10 MBps) [2024-11-18T23:17:01.819Z] Copying: 90/1024 [MB] (10 MBps) [2024-11-18T23:17:02.749Z] Copying: 101/1024 [MB] (10 MBps) [2024-11-18T23:17:03.688Z] Copying: 113/1024 [MB] (11 MBps) [2024-11-18T23:17:04.622Z] Copying: 124/1024 [MB] (11 MBps) [2024-11-18T23:17:05.557Z] Copying: 135/1024 [MB] (11 MBps) [2024-11-18T23:17:06.493Z] Copying: 147/1024 [MB] (11 MBps) [2024-11-18T23:17:07.877Z] Copying: 159/1024 [MB] (11 MBps) [2024-11-18T23:17:08.443Z] Copying: 169/1024 [MB] (10 MBps) [2024-11-18T23:17:09.818Z] Copying: 180/1024 [MB] (11 MBps) [2024-11-18T23:17:10.755Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-18T23:17:11.697Z] Copying: 203/1024 [MB] (11 MBps) [2024-11-18T23:17:12.629Z] Copying: 214/1024 [MB] (10 MBps) [2024-11-18T23:17:13.563Z] Copying: 226/1024 [MB] (12 MBps) [2024-11-18T23:17:14.499Z] Copying: 238/1024 [MB] (11 MBps) [2024-11-18T23:17:15.877Z] Copying: 250/1024 [MB] (11 MBps) [2024-11-18T23:17:16.444Z] Copying: 262/1024 [MB] (11 MBps) [2024-11-18T23:17:17.820Z] Copying: 273/1024 [MB] (11 MBps) [2024-11-18T23:17:18.757Z] Copying: 285/1024 [MB] (11 MBps) [2024-11-18T23:17:19.699Z] Copying: 297/1024 [MB] (11 MBps) [2024-11-18T23:17:20.667Z] Copying: 308/1024 [MB] (11 MBps) [2024-11-18T23:17:21.611Z] Copying: 318/1024 [MB] (10 MBps) [2024-11-18T23:17:22.553Z] Copying: 328/1024 [MB] (10 MBps) [2024-11-18T23:17:23.488Z] Copying: 339/1024 [MB] (10 MBps) 
[2024-11-18T23:17:24.870Z] Copying: 352/1024 [MB] (12 MBps) [2024-11-18T23:17:25.442Z] Copying: 375/1024 [MB] (23 MBps) [2024-11-18T23:17:26.822Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-18T23:17:27.756Z] Copying: 397/1024 [MB] (11 MBps) [2024-11-18T23:17:28.697Z] Copying: 410/1024 [MB] (12 MBps) [2024-11-18T23:17:29.639Z] Copying: 422/1024 [MB] (11 MBps) [2024-11-18T23:17:30.578Z] Copying: 434/1024 [MB] (11 MBps) [2024-11-18T23:17:31.517Z] Copying: 445/1024 [MB] (11 MBps) [2024-11-18T23:17:32.455Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-18T23:17:33.828Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-18T23:17:34.774Z] Copying: 480/1024 [MB] (12 MBps) [2024-11-18T23:17:35.707Z] Copying: 493/1024 [MB] (12 MBps) [2024-11-18T23:17:36.645Z] Copying: 505/1024 [MB] (12 MBps) [2024-11-18T23:17:37.587Z] Copying: 518/1024 [MB] (12 MBps) [2024-11-18T23:17:38.523Z] Copying: 528/1024 [MB] (10 MBps) [2024-11-18T23:17:39.457Z] Copying: 540/1024 [MB] (12 MBps) [2024-11-18T23:17:40.830Z] Copying: 553/1024 [MB] (12 MBps) [2024-11-18T23:17:41.763Z] Copying: 565/1024 [MB] (12 MBps) [2024-11-18T23:17:42.703Z] Copying: 577/1024 [MB] (11 MBps) [2024-11-18T23:17:43.637Z] Copying: 589/1024 [MB] (12 MBps) [2024-11-18T23:17:44.577Z] Copying: 601/1024 [MB] (12 MBps) [2024-11-18T23:17:45.517Z] Copying: 613/1024 [MB] (11 MBps) [2024-11-18T23:17:46.456Z] Copying: 628/1024 [MB] (15 MBps) [2024-11-18T23:17:47.837Z] Copying: 648/1024 [MB] (19 MBps) [2024-11-18T23:17:48.814Z] Copying: 667/1024 [MB] (18 MBps) [2024-11-18T23:17:49.770Z] Copying: 679/1024 [MB] (12 MBps) [2024-11-18T23:17:50.704Z] Copying: 691/1024 [MB] (11 MBps) [2024-11-18T23:17:51.681Z] Copying: 703/1024 [MB] (11 MBps) [2024-11-18T23:17:52.615Z] Copying: 714/1024 [MB] (11 MBps) [2024-11-18T23:17:53.556Z] Copying: 726/1024 [MB] (11 MBps) [2024-11-18T23:17:54.491Z] Copying: 737/1024 [MB] (11 MBps) [2024-11-18T23:17:55.867Z] Copying: 749/1024 [MB] (12 MBps) [2024-11-18T23:17:56.804Z] Copying: 762/1024 [MB] (12 MBps) [2024-11-18T23:17:57.739Z] Copying: 774/1024 [MB] (12 MBps) [2024-11-18T23:17:58.678Z] Copying: 785/1024 [MB] (11 MBps) [2024-11-18T23:17:59.616Z] Copying: 797/1024 [MB] (12 MBps) [2024-11-18T23:18:00.550Z] Copying: 809/1024 [MB] (11 MBps) [2024-11-18T23:18:01.490Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-18T23:18:02.873Z] Copying: 835/1024 [MB] (14 MBps) [2024-11-18T23:18:03.440Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-18T23:18:04.819Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-18T23:18:05.760Z] Copying: 876/1024 [MB] (18 MBps) [2024-11-18T23:18:06.701Z] Copying: 887/1024 [MB] (10 MBps) [2024-11-18T23:18:07.638Z] Copying: 902/1024 [MB] (14 MBps) [2024-11-18T23:18:08.576Z] Copying: 913/1024 [MB] (10 MBps) [2024-11-18T23:18:09.507Z] Copying: 923/1024 [MB] (10 MBps) [2024-11-18T23:18:10.441Z] Copying: 935/1024 [MB] (11 MBps) [2024-11-18T23:18:11.819Z] Copying: 947/1024 [MB] (12 MBps) [2024-11-18T23:18:12.754Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-18T23:18:13.688Z] Copying: 970/1024 [MB] (12 MBps) [2024-11-18T23:18:14.630Z] Copying: 983/1024 [MB] (13 MBps) [2024-11-18T23:18:15.566Z] Copying: 996/1024 [MB] (13 MBps) [2024-11-18T23:18:16.506Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-18T23:18:16.765Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-18 23:18:16.531604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.531684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:57.387 [2024-11-18 23:18:16.531700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.004 ms 00:22:57.387 [2024-11-18 23:18:16.531711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.531735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:57.387 [2024-11-18 23:18:16.532492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.532525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:57.387 [2024-11-18 23:18:16.532538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:22:57.387 [2024-11-18 23:18:16.532561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.532805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.532818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:57.387 [2024-11-18 23:18:16.532829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:22:57.387 [2024-11-18 23:18:16.532839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.538700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.538947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:57.387 [2024-11-18 23:18:16.538970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:22:57.387 [2024-11-18 23:18:16.538979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.546781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.546949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:57.387 [2024-11-18 23:18:16.547015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.658 ms 00:22:57.387 [2024-11-18 23:18:16.547040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.550147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.550319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:57.387 [2024-11-18 23:18:16.550381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:22:57.387 [2024-11-18 23:18:16.550404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.387 [2024-11-18 23:18:16.556329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.387 [2024-11-18 23:18:16.556493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:57.387 [2024-11-18 23:18:16.556554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.849 ms 00:22:57.387 [2024-11-18 23:18:16.556577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.936635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.650 [2024-11-18 23:18:16.936832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:57.650 [2024-11-18 23:18:16.937069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 380.003 ms 00:22:57.650 [2024-11-18 23:18:16.937097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.940696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:57.650 [2024-11-18 23:18:16.940867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:57.650 [2024-11-18 23:18:16.940934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.555 ms 00:22:57.650 [2024-11-18 23:18:16.940957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.944084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.650 [2024-11-18 23:18:16.944282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:57.650 [2024-11-18 23:18:16.944362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.079 ms 00:22:57.650 [2024-11-18 23:18:16.944388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.946950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.650 [2024-11-18 23:18:16.947112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:57.650 [2024-11-18 23:18:16.947203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:22:57.650 [2024-11-18 23:18:16.947227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.949679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.650 [2024-11-18 23:18:16.949843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:57.650 [2024-11-18 23:18:16.949906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:22:57.650 [2024-11-18 23:18:16.949928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.650 [2024-11-18 23:18:16.949972] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:57.650 [2024-11-18 23:18:16.950000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:57.650 [2024-11-18 23:18:16.950048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 
[2024-11-18 23:18:16.950572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.950966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:57.650 [2024-11-18 23:18:16.951485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:22:57.651 [2024-11-18 23:18:16.951649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.951995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:57.651 [2024-11-18 23:18:16.952270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:57.651 [2024-11-18 23:18:16.952279] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e018dca0-090d-4735-a005-a51940083468 00:22:57.651 [2024-11-18 23:18:16.952288] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:57.651 [2024-11-18 23:18:16.952295] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24768 00:22:57.651 [2024-11-18 23:18:16.952303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23808 00:22:57.651 [2024-11-18 23:18:16.952325] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0403 00:22:57.651 [2024-11-18 23:18:16.952332] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:57.651 [2024-11-18 23:18:16.952340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:57.651 [2024-11-18 23:18:16.952348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:57.651 [2024-11-18 23:18:16.952355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:57.651 [2024-11-18 23:18:16.952364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:57.651 [2024-11-18 23:18:16.952374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.651 [2024-11-18 23:18:16.952382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:57.651 [2024-11-18 23:18:16.952391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.403 ms 00:22:57.651 [2024-11-18 
23:18:16.952399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.651 [2024-11-18 23:18:16.954712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.652 [2024-11-18 23:18:16.954756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:57.652 [2024-11-18 23:18:16.954768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.288 ms 00:22:57.652 [2024-11-18 23:18:16.954777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.954897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:57.652 [2024-11-18 23:18:16.954907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:57.652 [2024-11-18 23:18:16.954918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:22:57.652 [2024-11-18 23:18:16.954926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.961710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.961897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:57.652 [2024-11-18 23:18:16.961916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.961925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.961991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.961999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:57.652 [2024-11-18 23:18:16.962007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.962016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.962087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.962103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:57.652 [2024-11-18 23:18:16.962113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.962121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.962137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.962145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:57.652 [2024-11-18 23:18:16.962201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.962209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.975745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.975797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:57.652 [2024-11-18 23:18:16.975809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.975817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.985882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.985932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:57.652 [2024-11-18 23:18:16.985944] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.985952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:57.652 [2024-11-18 23:18:16.986027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:57.652 [2024-11-18 23:18:16.986088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:57.652 [2024-11-18 23:18:16.986228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:57.652 [2024-11-18 23:18:16.986310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:57.652 [2024-11-18 23:18:16.986378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:57.652 [2024-11-18 23:18:16.986450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:57.652 [2024-11-18 23:18:16.986463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:57.652 [2024-11-18 23:18:16.986473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:57.652 [2024-11-18 23:18:16.986607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.972 ms, result 0 00:22:57.976 00:22:57.976 00:22:57.976 23:18:17 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:00.517 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:00.517 Process with pid 86351 is not found 00:23:00.517 Remove shared memory files 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86351 00:23:00.517 23:18:19 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86351 ']' 00:23:00.517 23:18:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86351 00:23:00.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86351) - No such process 00:23:00.517 23:18:19 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86351 is not found' 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:00.517 23:18:19 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:00.517 ************************************ 00:23:00.517 END TEST ftl_restore 00:23:00.517 ************************************ 00:23:00.517 00:23:00.517 real 4m56.048s 00:23:00.517 user 4m42.345s 00:23:00.517 sys 0m13.171s 00:23:00.517 23:18:19 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:00.517 23:18:19 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 23:18:19 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:00.517 23:18:19 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:00.517 23:18:19 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:00.517 23:18:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:00.517 ************************************ 00:23:00.517 START TEST ftl_dirty_shutdown 00:23:00.517 ************************************ 00:23:00.517 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:00.517 * Looking for test storage... 
00:23:00.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.517 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:00.518 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.780 --rc genhtml_branch_coverage=1 00:23:00.780 --rc genhtml_function_coverage=1 00:23:00.780 --rc genhtml_legend=1 00:23:00.780 --rc geninfo_all_blocks=1 00:23:00.780 --rc geninfo_unexecuted_blocks=1 00:23:00.780 00:23:00.780 ' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.780 --rc genhtml_branch_coverage=1 00:23:00.780 --rc genhtml_function_coverage=1 00:23:00.780 --rc genhtml_legend=1 00:23:00.780 --rc geninfo_all_blocks=1 00:23:00.780 --rc geninfo_unexecuted_blocks=1 00:23:00.780 00:23:00.780 ' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.780 --rc genhtml_branch_coverage=1 00:23:00.780 --rc genhtml_function_coverage=1 00:23:00.780 --rc genhtml_legend=1 00:23:00.780 --rc geninfo_all_blocks=1 00:23:00.780 --rc geninfo_unexecuted_blocks=1 00:23:00.780 00:23:00.780 ' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:00.780 --rc genhtml_branch_coverage=1 00:23:00.780 --rc genhtml_function_coverage=1 00:23:00.780 --rc genhtml_legend=1 00:23:00.780 --rc geninfo_all_blocks=1 00:23:00.780 --rc geninfo_unexecuted_blocks=1 00:23:00.780 00:23:00.780 ' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:00.780 23:18:19 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89499 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89499 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89499 ']' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.780 23:18:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:00.780 [2024-11-18 23:18:20.011504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:23:00.780 [2024-11-18 23:18:20.012260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89499 ] 00:23:01.040 [2024-11-18 23:18:20.169522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.040 [2024-11-18 23:18:20.236037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:01.609 23:18:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:01.869 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:02.130 { 00:23:02.130 "name": "nvme0n1", 00:23:02.130 "aliases": [ 00:23:02.130 "94a91329-7c83-4ed0-8bba-f7f57416b3bb" 00:23:02.130 ], 00:23:02.130 "product_name": "NVMe disk", 00:23:02.130 "block_size": 4096, 00:23:02.130 "num_blocks": 1310720, 00:23:02.130 "uuid": "94a91329-7c83-4ed0-8bba-f7f57416b3bb", 00:23:02.130 "numa_id": -1, 00:23:02.130 "assigned_rate_limits": { 00:23:02.130 "rw_ios_per_sec": 0, 00:23:02.130 "rw_mbytes_per_sec": 0, 00:23:02.130 "r_mbytes_per_sec": 0, 00:23:02.130 "w_mbytes_per_sec": 0 00:23:02.130 }, 00:23:02.130 "claimed": true, 00:23:02.130 "claim_type": "read_many_write_one", 00:23:02.130 "zoned": false, 00:23:02.130 "supported_io_types": { 00:23:02.130 "read": true, 00:23:02.130 "write": true, 00:23:02.130 "unmap": true, 00:23:02.130 "flush": true, 00:23:02.130 "reset": true, 00:23:02.130 "nvme_admin": true, 00:23:02.130 "nvme_io": true, 00:23:02.130 "nvme_io_md": false, 00:23:02.130 "write_zeroes": true, 00:23:02.130 "zcopy": false, 00:23:02.130 "get_zone_info": false, 00:23:02.130 "zone_management": false, 00:23:02.130 "zone_append": false, 00:23:02.130 "compare": true, 00:23:02.130 "compare_and_write": false, 00:23:02.130 "abort": true, 00:23:02.130 "seek_hole": false, 00:23:02.130 "seek_data": false, 00:23:02.130 
"copy": true, 00:23:02.130 "nvme_iov_md": false 00:23:02.130 }, 00:23:02.130 "driver_specific": { 00:23:02.130 "nvme": [ 00:23:02.130 { 00:23:02.130 "pci_address": "0000:00:11.0", 00:23:02.130 "trid": { 00:23:02.130 "trtype": "PCIe", 00:23:02.130 "traddr": "0000:00:11.0" 00:23:02.130 }, 00:23:02.130 "ctrlr_data": { 00:23:02.130 "cntlid": 0, 00:23:02.130 "vendor_id": "0x1b36", 00:23:02.130 "model_number": "QEMU NVMe Ctrl", 00:23:02.130 "serial_number": "12341", 00:23:02.130 "firmware_revision": "8.0.0", 00:23:02.130 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:02.130 "oacs": { 00:23:02.130 "security": 0, 00:23:02.130 "format": 1, 00:23:02.130 "firmware": 0, 00:23:02.130 "ns_manage": 1 00:23:02.130 }, 00:23:02.130 "multi_ctrlr": false, 00:23:02.130 "ana_reporting": false 00:23:02.130 }, 00:23:02.130 "vs": { 00:23:02.130 "nvme_version": "1.4" 00:23:02.130 }, 00:23:02.130 "ns_data": { 00:23:02.130 "id": 1, 00:23:02.130 "can_share": false 00:23:02.130 } 00:23:02.130 } 00:23:02.130 ], 00:23:02.130 "mp_policy": "active_passive" 00:23:02.130 } 00:23:02.130 } 00:23:02.130 ]' 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:02.130 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:02.392 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c9da61ad-5018-444f-af3f-0fb2cd04932d 00:23:02.392 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:02.392 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9da61ad-5018-444f-af3f-0fb2cd04932d 00:23:02.673 23:18:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:02.934 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=357a325c-b3a9-478a-b762-7b8c20585fd4 00:23:02.934 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 357a325c-b3a9-478a-b762-7b8c20585fd4 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:03.194 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:03.453 { 00:23:03.453 "name": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:03.453 "aliases": [ 00:23:03.453 "lvs/nvme0n1p0" 00:23:03.453 ], 00:23:03.453 "product_name": "Logical Volume", 00:23:03.453 "block_size": 4096, 00:23:03.453 "num_blocks": 26476544, 00:23:03.453 "uuid": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:03.453 "assigned_rate_limits": { 00:23:03.453 "rw_ios_per_sec": 0, 00:23:03.453 "rw_mbytes_per_sec": 0, 00:23:03.453 "r_mbytes_per_sec": 0, 00:23:03.453 "w_mbytes_per_sec": 0 00:23:03.453 }, 00:23:03.453 "claimed": false, 00:23:03.453 "zoned": false, 00:23:03.453 "supported_io_types": { 00:23:03.453 "read": true, 00:23:03.453 "write": true, 00:23:03.453 "unmap": true, 00:23:03.453 "flush": false, 00:23:03.453 "reset": true, 00:23:03.453 "nvme_admin": false, 00:23:03.453 "nvme_io": false, 00:23:03.453 "nvme_io_md": false, 00:23:03.453 "write_zeroes": true, 00:23:03.453 "zcopy": false, 00:23:03.453 "get_zone_info": false, 00:23:03.453 "zone_management": false, 00:23:03.453 "zone_append": false, 00:23:03.453 "compare": false, 00:23:03.453 "compare_and_write": false, 00:23:03.453 "abort": false, 00:23:03.453 "seek_hole": true, 00:23:03.453 "seek_data": true, 00:23:03.453 "copy": false, 00:23:03.453 "nvme_iov_md": false 00:23:03.453 }, 00:23:03.453 "driver_specific": { 00:23:03.453 "lvol": { 00:23:03.453 "lvol_store_uuid": "357a325c-b3a9-478a-b762-7b8c20585fd4", 00:23:03.453 "base_bdev": "nvme0n1", 00:23:03.453 "thin_provision": true, 00:23:03.453 "num_allocated_clusters": 0, 00:23:03.453 "snapshot": false, 00:23:03.453 "clone": false, 00:23:03.453 "esnap_clone": false 00:23:03.453 } 00:23:03.453 } 00:23:03.453 } 00:23:03.453 ]' 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:03.453 23:18:22 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:03.713 23:18:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:03.973 { 00:23:03.973 "name": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:03.973 "aliases": [ 00:23:03.973 "lvs/nvme0n1p0" 00:23:03.973 ], 00:23:03.973 "product_name": "Logical Volume", 00:23:03.973 "block_size": 4096, 00:23:03.973 "num_blocks": 26476544, 00:23:03.973 "uuid": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:03.973 "assigned_rate_limits": { 00:23:03.973 "rw_ios_per_sec": 0, 00:23:03.973 "rw_mbytes_per_sec": 0, 00:23:03.973 "r_mbytes_per_sec": 0, 00:23:03.973 "w_mbytes_per_sec": 0 00:23:03.973 }, 00:23:03.973 "claimed": false, 00:23:03.973 "zoned": false, 00:23:03.973 "supported_io_types": { 00:23:03.973 "read": true, 00:23:03.973 "write": true, 00:23:03.973 "unmap": true, 00:23:03.973 "flush": false, 00:23:03.973 "reset": true, 00:23:03.973 "nvme_admin": false, 00:23:03.973 "nvme_io": false, 00:23:03.973 "nvme_io_md": false, 00:23:03.973 "write_zeroes": true, 00:23:03.973 "zcopy": false, 00:23:03.973 "get_zone_info": false, 00:23:03.973 "zone_management": false, 00:23:03.973 "zone_append": false, 00:23:03.973 "compare": false, 00:23:03.973 "compare_and_write": false, 00:23:03.973 "abort": false, 00:23:03.973 "seek_hole": true, 00:23:03.973 "seek_data": true, 00:23:03.973 "copy": false, 00:23:03.973 "nvme_iov_md": false 00:23:03.973 }, 00:23:03.973 "driver_specific": { 00:23:03.973 "lvol": { 00:23:03.973 "lvol_store_uuid": "357a325c-b3a9-478a-b762-7b8c20585fd4", 00:23:03.973 "base_bdev": "nvme0n1", 00:23:03.973 "thin_provision": true, 00:23:03.973 "num_allocated_clusters": 0, 00:23:03.973 "snapshot": false, 00:23:03.973 "clone": false, 00:23:03.973 "esnap_clone": false 00:23:03.973 } 00:23:03.973 } 00:23:03.973 } 00:23:03.973 ]' 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:03.973 23:18:23 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:04.235 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 722f6525-cb63-4008-80ff-ac12751ff6bb 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:04.495 { 00:23:04.495 "name": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:04.495 "aliases": [ 00:23:04.495 "lvs/nvme0n1p0" 00:23:04.495 ], 00:23:04.495 "product_name": "Logical Volume", 00:23:04.495 "block_size": 4096, 00:23:04.495 "num_blocks": 26476544, 00:23:04.495 "uuid": "722f6525-cb63-4008-80ff-ac12751ff6bb", 00:23:04.495 "assigned_rate_limits": { 00:23:04.495 "rw_ios_per_sec": 0, 00:23:04.495 "rw_mbytes_per_sec": 0, 00:23:04.495 "r_mbytes_per_sec": 0, 00:23:04.495 "w_mbytes_per_sec": 0 00:23:04.495 }, 00:23:04.495 "claimed": false, 00:23:04.495 "zoned": false, 00:23:04.495 "supported_io_types": { 00:23:04.495 "read": true, 00:23:04.495 "write": true, 00:23:04.495 "unmap": true, 00:23:04.495 "flush": false, 00:23:04.495 "reset": true, 00:23:04.495 "nvme_admin": false, 00:23:04.495 "nvme_io": false, 00:23:04.495 "nvme_io_md": false, 00:23:04.495 "write_zeroes": true, 00:23:04.495 "zcopy": false, 00:23:04.495 "get_zone_info": false, 00:23:04.495 "zone_management": false, 00:23:04.495 "zone_append": false, 00:23:04.495 "compare": false, 00:23:04.495 "compare_and_write": false, 00:23:04.495 "abort": false, 00:23:04.495 "seek_hole": true, 00:23:04.495 "seek_data": true, 00:23:04.495 "copy": false, 00:23:04.495 "nvme_iov_md": false 00:23:04.495 }, 00:23:04.495 "driver_specific": { 00:23:04.495 "lvol": { 00:23:04.495 "lvol_store_uuid": "357a325c-b3a9-478a-b762-7b8c20585fd4", 00:23:04.495 "base_bdev": "nvme0n1", 00:23:04.495 "thin_provision": true, 00:23:04.495 "num_allocated_clusters": 0, 00:23:04.495 "snapshot": false, 00:23:04.495 "clone": false, 00:23:04.495 "esnap_clone": false 00:23:04.495 } 00:23:04.495 } 00:23:04.495 } 00:23:04.495 ]' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 722f6525-cb63-4008-80ff-ac12751ff6bb 
--l2p_dram_limit 10' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:04.495 23:18:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 722f6525-cb63-4008-80ff-ac12751ff6bb --l2p_dram_limit 10 -c nvc0n1p0 00:23:04.757 [2024-11-18 23:18:23.942609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.942676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:04.757 [2024-11-18 23:18:23.942699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.757 [2024-11-18 23:18:23.942714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.942782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.942797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.757 [2024-11-18 23:18:23.942805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:04.757 [2024-11-18 23:18:23.942820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.942847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:04.757 [2024-11-18 23:18:23.943196] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:04.757 [2024-11-18 23:18:23.943215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.943227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.757 [2024-11-18 23:18:23.943238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:23:04.757 [2024-11-18 23:18:23.943248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.943346] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d010ea4a-a8d8-4554-bde3-40831b373ea1 00:23:04.757 [2024-11-18 23:18:23.945123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.945190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:04.757 [2024-11-18 23:18:23.945204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:04.757 [2024-11-18 23:18:23.945213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.954086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.954132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.757 [2024-11-18 23:18:23.954146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.807 ms 00:23:04.757 [2024-11-18 23:18:23.954190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.954283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.954294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.757 [2024-11-18 23:18:23.954305] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:04.757 [2024-11-18 23:18:23.954319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.954375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.954386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.757 [2024-11-18 23:18:23.954400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:04.757 [2024-11-18 23:18:23.954408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.954437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.757 [2024-11-18 23:18:23.956749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.956961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.757 [2024-11-18 23:18:23.956984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:23:04.757 [2024-11-18 23:18:23.956995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.957039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.957051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.757 [2024-11-18 23:18:23.957061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:04.757 [2024-11-18 23:18:23.957075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.957103] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:04.757 [2024-11-18 23:18:23.957285] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.757 [2024-11-18 23:18:23.957298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.757 [2024-11-18 23:18:23.957312] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:04.757 [2024-11-18 23:18:23.957323] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.757 [2024-11-18 23:18:23.957335] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.757 [2024-11-18 23:18:23.957343] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:04.757 [2024-11-18 23:18:23.957364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.757 [2024-11-18 23:18:23.957375] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.757 [2024-11-18 23:18:23.957385] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.757 [2024-11-18 23:18:23.957396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.957409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.757 [2024-11-18 23:18:23.957417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:23:04.757 [2024-11-18 23:18:23.957427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.957512] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.757 [2024-11-18 23:18:23.957525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.757 [2024-11-18 23:18:23.957533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:04.757 [2024-11-18 23:18:23.957542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.757 [2024-11-18 23:18:23.957641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.757 [2024-11-18 23:18:23.957657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.757 [2024-11-18 23:18:23.957671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.757 [2024-11-18 23:18:23.957681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.757 [2024-11-18 23:18:23.957689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.757 [2024-11-18 23:18:23.957699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.757 [2024-11-18 23:18:23.957707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:04.757 [2024-11-18 23:18:23.957715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.757 [2024-11-18 23:18:23.957722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:04.757 [2024-11-18 23:18:23.957731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.757 [2024-11-18 23:18:23.957738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.757 [2024-11-18 23:18:23.957747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:04.758 [2024-11-18 23:18:23.957756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.758 [2024-11-18 23:18:23.957767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.758 [2024-11-18 23:18:23.957774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:04.758 [2024-11-18 23:18:23.957783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.758 [2024-11-18 23:18:23.957803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:04.758 [2024-11-18 23:18:23.957810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.758 [2024-11-18 23:18:23.957826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.758 [2024-11-18 23:18:23.957843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.758 [2024-11-18 23:18:23.957852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.758 [2024-11-18 23:18:23.957867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.758 [2024-11-18 23:18:23.957874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.758 [2024-11-18 23:18:23.957889] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.758 [2024-11-18 23:18:23.957902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.758 [2024-11-18 23:18:23.957918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.758 [2024-11-18 23:18:23.957924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.758 [2024-11-18 23:18:23.957940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.758 [2024-11-18 23:18:23.957949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:04.758 [2024-11-18 23:18:23.957955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.758 [2024-11-18 23:18:23.957963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.758 [2024-11-18 23:18:23.957970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:04.758 [2024-11-18 23:18:23.957978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.758 [2024-11-18 23:18:23.957986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.758 [2024-11-18 23:18:23.957995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:04.758 [2024-11-18 23:18:23.958001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.758 [2024-11-18 23:18:23.958009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.758 [2024-11-18 23:18:23.958017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.758 [2024-11-18 23:18:23.958030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.758 [2024-11-18 23:18:23.958037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.758 [2024-11-18 23:18:23.958048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.758 [2024-11-18 23:18:23.958055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.758 [2024-11-18 23:18:23.958069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.758 [2024-11-18 23:18:23.958077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.758 [2024-11-18 23:18:23.958085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.758 [2024-11-18 23:18:23.958092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.758 [2024-11-18 23:18:23.958106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.758 [2024-11-18 23:18:23.958117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:04.758 [2024-11-18 23:18:23.958135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:04.758 [2024-11-18 23:18:23.958144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:04.758 [2024-11-18 23:18:23.958167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:04.758 [2024-11-18 23:18:23.958177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:04.758 [2024-11-18 23:18:23.958184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:04.758 [2024-11-18 23:18:23.958196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:04.758 [2024-11-18 23:18:23.958203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:04.758 [2024-11-18 23:18:23.958213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:04.758 [2024-11-18 23:18:23.958220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:04.758 [2024-11-18 23:18:23.958263] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.758 [2024-11-18 23:18:23.958278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.758 [2024-11-18 23:18:23.958295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.758 [2024-11-18 23:18:23.958305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.758 [2024-11-18 23:18:23.958313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.758 [2024-11-18 23:18:23.958324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.758 [2024-11-18 23:18:23.958332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.758 [2024-11-18 23:18:23.958347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:23:04.758 [2024-11-18 23:18:23.958355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.758 [2024-11-18 23:18:23.958419] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:04.758 [2024-11-18 23:18:23.958430] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:08.966 [2024-11-18 23:18:27.872250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.872475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:08.966 [2024-11-18 23:18:27.872585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3913.812 ms 00:23:08.966 [2024-11-18 23:18:27.872614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.886442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.886623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:08.966 [2024-11-18 23:18:27.886702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.677 ms 00:23:08.966 [2024-11-18 23:18:27.886728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.886872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.886965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:08.966 [2024-11-18 23:18:27.886997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:08.966 [2024-11-18 23:18:27.887017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.898646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.898811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:08.966 [2024-11-18 23:18:27.898875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.535 ms 00:23:08.966 [2024-11-18 23:18:27.898905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.898951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.898976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:08.966 [2024-11-18 23:18:27.898999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:08.966 [2024-11-18 23:18:27.899018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.899626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.899681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:08.966 [2024-11-18 23:18:27.899706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:23:08.966 [2024-11-18 23:18:27.899726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.899957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.899985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:08.966 [2024-11-18 23:18:27.900012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:08.966 [2024-11-18 23:18:27.900032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.923065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.923511] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:08.966 [2024-11-18 23:18:27.923723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.931 ms 00:23:08.966 [2024-11-18 23:18:27.923753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:27.933739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:08.966 [2024-11-18 23:18:27.937629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:27.937780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:08.966 [2024-11-18 23:18:27.937838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.645 ms 00:23:08.966 [2024-11-18 23:18:27.937865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.019070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.019264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:08.966 [2024-11-18 23:18:28.019357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.159 ms 00:23:08.966 [2024-11-18 23:18:28.019391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.019597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.019635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:08.966 [2024-11-18 23:18:28.019790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:08.966 [2024-11-18 23:18:28.019838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.025600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.025762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:08.966 [2024-11-18 23:18:28.025818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.690 ms 00:23:08.966 [2024-11-18 23:18:28.025844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.030672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.030820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:08.966 [2024-11-18 23:18:28.030887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:23:08.966 [2024-11-18 23:18:28.030910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.031275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.031344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:08.966 [2024-11-18 23:18:28.031367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:23:08.966 [2024-11-18 23:18:28.031573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.082450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.082641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:08.966 [2024-11-18 23:18:28.082665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.833 ms 00:23:08.966 [2024-11-18 23:18:28.082684] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.090242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.090298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:08.966 [2024-11-18 23:18:28.090310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.479 ms 00:23:08.966 [2024-11-18 23:18:28.090321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.096569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.096629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:08.966 [2024-11-18 23:18:28.096639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms 00:23:08.966 [2024-11-18 23:18:28.096648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.103008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.103064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:08.966 [2024-11-18 23:18:28.103075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:23:08.966 [2024-11-18 23:18:28.103088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.103137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.103150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:08.966 [2024-11-18 23:18:28.103194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:08.966 [2024-11-18 23:18:28.103204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.103330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:08.966 [2024-11-18 23:18:28.103345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:08.966 [2024-11-18 23:18:28.103356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:08.966 [2024-11-18 23:18:28.103374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:08.966 [2024-11-18 23:18:28.105089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4162.019 ms, result 0 00:23:08.966 { 00:23:08.966 "name": "ftl0", 00:23:08.966 "uuid": "d010ea4a-a8d8-4554-bde3-40831b373ea1" 00:23:08.966 } 00:23:08.966 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:08.966 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:09.229 /dev/nbd0 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:09.229 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:09.491 1+0 records in 00:23:09.491 1+0 records out 00:23:09.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500837 s, 8.2 MB/s 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:09.491 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:09.491 [2024-11-18 23:18:28.690369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:09.491 [2024-11-18 23:18:28.690498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89646 ] 00:23:09.491 [2024-11-18 23:18:28.841428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.752 [2024-11-18 23:18:28.891297] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.702  [2024-11-18T23:18:31.014Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-18T23:18:32.389Z] Copying: 446/1024 [MB] (257 MBps) [2024-11-18T23:18:33.324Z] Copying: 708/1024 [MB] (261 MBps) [2024-11-18T23:18:33.324Z] Copying: 958/1024 [MB] (249 MBps) [2024-11-18T23:18:33.584Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:23:14.206 00:23:14.206 23:18:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:16.114 23:18:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:16.114 [2024-11-18 23:18:35.455823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
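For reference, a minimal sketch of the device bring-up and write path that dirty_shutdown.sh and test/ftl/common.sh have executed up to this point, reconstructed only from the RPC and spdk_dd invocations visible in this log. Here rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, spdk_dd abbreviates build/bin/spdk_dd, and $STALE_LVS_UUID / $LVS_UUID / $LVOL_UUID stand for the UUIDs printed in the records above; the PCIe addresses and sizes are specific to this run and would differ elsewhere:

    # attach the base (data) and cache NVMe controllers by PCIe address
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # wipe any stale lvstore, then carve a thin-provisioned 103424 MiB lvol
    rpc.py bdev_lvol_delete_lvstore -u $STALE_LVS_UUID
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u $LVS_UUID
    # reserve 5171 MiB of the cache device as the FTL write-buffer cache
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    # create the FTL bdev with a 10 MiB L2P DRAM limit, then export it via NBD
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d $LVOL_UUID --l2p_dram_limit 10 -c nvc0n1p0
    rpc.py nbd_start_disk ftl0 /dev/nbd0
    # generate 1 GiB of random data, checksum it, write it through the device
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The DPDK EAL parameter record that follows is the startup banner of that final spdk_dd write (pid 89725), and the Copying: progress entries below track its transfer of the 1024 MB testfile onto /dev/nbd0.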
00:23:16.114 [2024-11-18 23:18:35.455934] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89725 ] 00:23:16.374 [2024-11-18 23:18:35.599681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.374 [2024-11-18 23:18:35.629747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.316  [2024-11-18T23:18:38.069Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-18T23:18:39.004Z] Copying: 24944/1048576 [kB] (8464 kBps) [2024-11-18T23:18:39.939Z] Copying: 33768/1048576 [kB] (8824 kBps) [2024-11-18T23:18:40.873Z] Copying: 48/1024 [MB] (15 MBps) [2024-11-18T23:18:41.808Z] Copying: 59/1024 [MB] (11 MBps) [2024-11-18T23:18:42.744Z] Copying: 75/1024 [MB] (15 MBps) [2024-11-18T23:18:43.680Z] Copying: 87/1024 [MB] (12 MBps) [2024-11-18T23:18:45.055Z] Copying: 102/1024 [MB] (14 MBps) [2024-11-18T23:18:45.990Z] Copying: 119/1024 [MB] (16 MBps) [2024-11-18T23:18:46.939Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-18T23:18:47.935Z] Copying: 155/1024 [MB] (17 MBps) [2024-11-18T23:18:48.870Z] Copying: 184/1024 [MB] (28 MBps) [2024-11-18T23:18:49.805Z] Copying: 201/1024 [MB] (17 MBps) [2024-11-18T23:18:50.741Z] Copying: 222/1024 [MB] (20 MBps) [2024-11-18T23:18:51.678Z] Copying: 244/1024 [MB] (21 MBps) [2024-11-18T23:18:53.053Z] Copying: 264/1024 [MB] (20 MBps) [2024-11-18T23:18:53.999Z] Copying: 281/1024 [MB] (16 MBps) [2024-11-18T23:18:54.935Z] Copying: 302/1024 [MB] (21 MBps) [2024-11-18T23:18:55.869Z] Copying: 320/1024 [MB] (18 MBps) [2024-11-18T23:18:56.803Z] Copying: 342/1024 [MB] (21 MBps) [2024-11-18T23:18:57.737Z] Copying: 362/1024 [MB] (20 MBps) [2024-11-18T23:18:59.112Z] Copying: 384/1024 [MB] (22 MBps) [2024-11-18T23:18:59.678Z] Copying: 405/1024 [MB] (20 MBps) [2024-11-18T23:19:01.053Z] Copying: 426/1024 [MB] (20 MBps) [2024-11-18T23:19:01.987Z] Copying: 445/1024 [MB] (19 MBps) [2024-11-18T23:19:02.921Z] Copying: 468/1024 [MB] (22 MBps) [2024-11-18T23:19:03.857Z] Copying: 489/1024 [MB] (21 MBps) [2024-11-18T23:19:04.791Z] Copying: 508/1024 [MB] (19 MBps) [2024-11-18T23:19:05.726Z] Copying: 529/1024 [MB] (20 MBps) [2024-11-18T23:19:07.102Z] Copying: 546/1024 [MB] (17 MBps) [2024-11-18T23:19:08.037Z] Copying: 567/1024 [MB] (21 MBps) [2024-11-18T23:19:08.973Z] Copying: 588/1024 [MB] (20 MBps) [2024-11-18T23:19:09.907Z] Copying: 608/1024 [MB] (19 MBps) [2024-11-18T23:19:10.840Z] Copying: 627/1024 [MB] (19 MBps) [2024-11-18T23:19:11.776Z] Copying: 646/1024 [MB] (18 MBps) [2024-11-18T23:19:12.710Z] Copying: 663/1024 [MB] (16 MBps) [2024-11-18T23:19:14.085Z] Copying: 682/1024 [MB] (19 MBps) [2024-11-18T23:19:15.020Z] Copying: 699/1024 [MB] (17 MBps) [2024-11-18T23:19:15.956Z] Copying: 717/1024 [MB] (18 MBps) [2024-11-18T23:19:16.931Z] Copying: 734/1024 [MB] (16 MBps) [2024-11-18T23:19:17.864Z] Copying: 749/1024 [MB] (15 MBps) [2024-11-18T23:19:18.797Z] Copying: 767/1024 [MB] (18 MBps) [2024-11-18T23:19:19.731Z] Copying: 783/1024 [MB] (16 MBps) [2024-11-18T23:19:21.106Z] Copying: 796/1024 [MB] (13 MBps) [2024-11-18T23:19:22.042Z] Copying: 815/1024 [MB] (18 MBps) [2024-11-18T23:19:22.975Z] Copying: 833/1024 [MB] (18 MBps) [2024-11-18T23:19:23.908Z] Copying: 852/1024 [MB] (18 MBps) [2024-11-18T23:19:24.843Z] Copying: 876/1024 [MB] (24 MBps) [2024-11-18T23:19:25.776Z] Copying: 896/1024 [MB] (20 MBps) [2024-11-18T23:19:26.709Z] Copying: 915/1024 [MB] (18 MBps) 
[2024-11-18T23:19:28.082Z] Copying: 935/1024 [MB] (19 MBps) [2024-11-18T23:19:29.015Z] Copying: 952/1024 [MB] (17 MBps) [2024-11-18T23:19:29.955Z] Copying: 970/1024 [MB] (18 MBps) [2024-11-18T23:19:30.888Z] Copying: 990/1024 [MB] (19 MBps) [2024-11-18T23:19:31.147Z] Copying: 1014/1024 [MB] (24 MBps) [2024-11-18T23:19:31.407Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:24:12.029 00:24:12.029 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:12.029 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:12.290 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:12.554 [2024-11-18 23:19:31.699672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.699747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:12.554 [2024-11-18 23:19:31.699771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:12.554 [2024-11-18 23:19:31.699782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.699812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.554 [2024-11-18 23:19:31.700792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.700842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:12.554 [2024-11-18 23:19:31.700856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:24:12.554 [2024-11-18 23:19:31.700871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.703571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.703625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:12.554 [2024-11-18 23:19:31.703637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:24:12.554 [2024-11-18 23:19:31.703649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.722254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.722466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:12.554 [2024-11-18 23:19:31.722490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.584 ms 00:24:12.554 [2024-11-18 23:19:31.722502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.728852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.728903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:12.554 [2024-11-18 23:19:31.728915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.301 ms 00:24:12.554 [2024-11-18 23:19:31.728926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.731719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.731781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:12.554 [2024-11-18 23:19:31.731792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:24:12.554 [2024-11-18 23:19:31.731803] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.738991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.739055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:12.554 [2024-11-18 23:19:31.739070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.140 ms 00:24:12.554 [2024-11-18 23:19:31.739084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.739265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.739280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:12.554 [2024-11-18 23:19:31.739289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:24:12.554 [2024-11-18 23:19:31.739316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.742706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.742761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:12.554 [2024-11-18 23:19:31.742772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:24:12.554 [2024-11-18 23:19:31.742785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.745730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.745788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:12.554 [2024-11-18 23:19:31.745798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:24:12.554 [2024-11-18 23:19:31.745808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.748239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.748425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:12.554 [2024-11-18 23:19:31.748444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:24:12.554 [2024-11-18 23:19:31.748454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.750717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.554 [2024-11-18 23:19:31.750772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:12.554 [2024-11-18 23:19:31.750783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.163 ms 00:24:12.554 [2024-11-18 23:19:31.750794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.554 [2024-11-18 23:19:31.750837] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:12.554 [2024-11-18 23:19:31.750859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:12.554 [2024-11-18 23:19:31.750875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:12.554 [2024-11-18 23:19:31.750887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.750991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751135] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 
23:19:31.751414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:12.555 [2024-11-18 23:19:31.751649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 
00:24:12.556 [2024-11-18 23:19:31.751675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:12.556 [2024-11-18 23:19:31.751874] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:12.556 [2024-11-18 23:19:31.751883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d010ea4a-a8d8-4554-bde3-40831b373ea1 00:24:12.556 [2024-11-18 23:19:31.751897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:12.556 [2024-11-18 23:19:31.751904] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:12.556 [2024-11-18 23:19:31.751914] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:12.556 [2024-11-18 23:19:31.751922] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:12.556 [2024-11-18 23:19:31.751932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:12.556 [2024-11-18 23:19:31.751941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:12.556 [2024-11-18 23:19:31.751950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:12.556 [2024-11-18 23:19:31.751957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:12.556 [2024-11-18 23:19:31.751966] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:12.556 [2024-11-18 23:19:31.751974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.556 [2024-11-18 23:19:31.751987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:12.556 [2024-11-18 23:19:31.751996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.138 ms 00:24:12.556 [2024-11-18 23:19:31.752006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.755090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.556 [2024-11-18 23:19:31.755130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:12.556 [2024-11-18 23:19:31.755141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:24:12.556 [2024-11-18 23:19:31.755171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.755341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.556 [2024-11-18 23:19:31.755354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:12.556 [2024-11-18 23:19:31.755364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:24:12.556 [2024-11-18 23:19:31.755374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.766296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.766465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.556 [2024-11-18 23:19:31.766525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.766554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.766635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.766660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.556 [2024-11-18 23:19:31.766681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.766703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.766809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.766995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.556 [2024-11-18 23:19:31.767023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.767046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.767081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 
[2024-11-18 23:19:31.767107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.556 [2024-11-18 23:19:31.767126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.767149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.786180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.786372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.556 [2024-11-18 23:19:31.786430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.786458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.802537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.802740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.556 [2024-11-18 23:19:31.802808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.802835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.802951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.802993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.556 [2024-11-18 23:19:31.803014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.803037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.803104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.803339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.556 [2024-11-18 23:19:31.803356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.803369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.803483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.803501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.556 [2024-11-18 23:19:31.803510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.556 [2024-11-18 23:19:31.803521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.556 [2024-11-18 23:19:31.803557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.556 [2024-11-18 23:19:31.803570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:12.556 [2024-11-18 23:19:31.803579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.557 [2024-11-18 23:19:31.803589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.557 [2024-11-18 23:19:31.803645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.557 [2024-11-18 23:19:31.803663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:12.557 [2024-11-18 23:19:31.803672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.557 [2024-11-18 23:19:31.803683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.557 [2024-11-18 23:19:31.803743] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:12.557 [2024-11-18 23:19:31.803757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:12.557 [2024-11-18 23:19:31.803766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:12.557 [2024-11-18 23:19:31.803778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.557 [2024-11-18 23:19:31.803961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 104.237 ms, result 0 00:24:12.557 true 00:24:12.557 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89499 00:24:12.557 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89499 00:24:12.557 23:19:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:12.557 [2024-11-18 23:19:31.911857] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... [2024-11-18 23:19:31.912009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90313 ] 00:24:12.818 [2024-11-18 23:19:32.068098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.818 [2024-11-18 23:19:32.139930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.202 [2024-11-18T23:19:34.521Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-18T23:19:35.462Z] Copying: 448/1024 [MB] (257 MBps) [2024-11-18T23:19:36.403Z] Copying: 704/1024 [MB] (255 MBps) [2024-11-18T23:19:36.664Z] Copying: 961/1024 [MB] (256 MBps) [2024-11-18T23:19:36.925Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:24:17.547 00:24:17.547 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89499 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:17.547 23:19:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:17.547 [2024-11-18 23:19:36.753234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:17.547 [2024-11-18 23:19:36.753522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90366 ] 00:24:17.547 [2024-11-18 23:19:36.902088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.807 [2024-11-18 23:19:36.955570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.807 [2024-11-18 23:19:37.054693] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.807 [2024-11-18 23:19:37.054755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:17.807 [2024-11-18 23:19:37.116787] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:17.807 [2024-11-18 23:19:37.117089] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:17.807 [2024-11-18 23:19:37.117279] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:18.069 [2024-11-18 23:19:37.321125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.069 [2024-11-18 23:19:37.321174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:18.069 [2024-11-18 23:19:37.321186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:18.069 [2024-11-18 23:19:37.321193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.069 [2024-11-18 23:19:37.321230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.069 [2024-11-18 23:19:37.321240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:18.069 [2024-11-18 23:19:37.321247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:18.069 [2024-11-18 23:19:37.321253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.069 [2024-11-18 23:19:37.321269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:18.069 [2024-11-18 23:19:37.321454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:18.070 [2024-11-18 23:19:37.321466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.321472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:18.070 [2024-11-18 23:19:37.321480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:24:18.070 [2024-11-18 23:19:37.321486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.322702] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:18.070 [2024-11-18 23:19:37.325298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.325329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:18.070 [2024-11-18 23:19:37.325338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.597 ms 00:24:18.070 [2024-11-18 23:19:37.325344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.325388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.325396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:18.070 [2024-11-18 23:19:37.325403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:18.070 [2024-11-18 23:19:37.325414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.331508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.331537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:18.070 [2024-11-18 23:19:37.331545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.053 ms 00:24:18.070 [2024-11-18 23:19:37.331551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.331622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.331630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:18.070 [2024-11-18 23:19:37.331636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:18.070 [2024-11-18 23:19:37.331645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.331684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.331698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:18.070 [2024-11-18 23:19:37.331704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:18.070 [2024-11-18 23:19:37.331710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.331728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:18.070 [2024-11-18 23:19:37.333258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.333281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:18.070 [2024-11-18 23:19:37.333288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:24:18.070 [2024-11-18 23:19:37.333294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.333322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.333329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:18.070 [2024-11-18 23:19:37.333335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:18.070 [2024-11-18 23:19:37.333341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.333357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:18.070 [2024-11-18 23:19:37.333373] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:18.070 [2024-11-18 23:19:37.333404] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:18.070 [2024-11-18 23:19:37.333418] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:18.070 [2024-11-18 23:19:37.333501] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:18.070 [2024-11-18 23:19:37.333509] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:18.070 
[2024-11-18 23:19:37.333522] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:18.070 [2024-11-18 23:19:37.333529] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333537] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333543] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:18.070 [2024-11-18 23:19:37.333549] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:18.070 [2024-11-18 23:19:37.333555] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:18.070 [2024-11-18 23:19:37.333560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:18.070 [2024-11-18 23:19:37.333567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.333575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:18.070 [2024-11-18 23:19:37.333581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:24:18.070 [2024-11-18 23:19:37.333587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.333649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.070 [2024-11-18 23:19:37.333658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:18.070 [2024-11-18 23:19:37.333664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:18.070 [2024-11-18 23:19:37.333674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.070 [2024-11-18 23:19:37.333748] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:18.070 [2024-11-18 23:19:37.333755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:18.070 [2024-11-18 23:19:37.333764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:18.070 [2024-11-18 23:19:37.333782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:18.070 [2024-11-18 23:19:37.333799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.070 [2024-11-18 23:19:37.333809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:18.070 [2024-11-18 23:19:37.333822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:18.070 [2024-11-18 23:19:37.333827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:18.070 [2024-11-18 23:19:37.333835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:18.070 [2024-11-18 23:19:37.333841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:18.070 [2024-11-18 23:19:37.333846] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:18.070 [2024-11-18 23:19:37.333856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:18.070 [2024-11-18 23:19:37.333872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:18.070 [2024-11-18 23:19:37.333887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:18.070 [2024-11-18 23:19:37.333902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:18.070 [2024-11-18 23:19:37.333923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:18.070 [2024-11-18 23:19:37.333935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:18.070 [2024-11-18 23:19:37.333941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:18.070 [2024-11-18 23:19:37.333947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.070 [2024-11-18 23:19:37.333952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:18.070 [2024-11-18 23:19:37.333959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:18.070 [2024-11-18 23:19:37.333964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:18.070 [2024-11-18 23:19:37.333970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:18.070 [2024-11-18 23:19:37.333975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:18.070 [2024-11-18 23:19:37.333981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.071 [2024-11-18 23:19:37.333987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:18.071 [2024-11-18 23:19:37.333993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:18.071 [2024-11-18 23:19:37.333999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.071 [2024-11-18 23:19:37.334009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:18.071 [2024-11-18 23:19:37.334016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:18.071 [2024-11-18 23:19:37.334024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:18.071 [2024-11-18 23:19:37.334030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:18.071 [2024-11-18 
23:19:37.334037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:18.071 [2024-11-18 23:19:37.334043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:18.071 [2024-11-18 23:19:37.334049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:18.071 [2024-11-18 23:19:37.334055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:18.071 [2024-11-18 23:19:37.334061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:18.071 [2024-11-18 23:19:37.334067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:18.071 [2024-11-18 23:19:37.334074] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:18.071 [2024-11-18 23:19:37.334082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:18.071 [2024-11-18 23:19:37.334095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:18.071 [2024-11-18 23:19:37.334101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:18.071 [2024-11-18 23:19:37.334108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:18.071 [2024-11-18 23:19:37.334114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:18.071 [2024-11-18 23:19:37.334120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:18.071 [2024-11-18 23:19:37.334129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:18.071 [2024-11-18 23:19:37.334136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:18.071 [2024-11-18 23:19:37.334142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:18.071 [2024-11-18 23:19:37.334148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:18.071 [2024-11-18 23:19:37.334189] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:18.071 [2024-11-18 23:19:37.334196] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:18.071 [2024-11-18 23:19:37.334210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:18.071 [2024-11-18 23:19:37.334217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:18.071 [2024-11-18 23:19:37.334223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:18.071 [2024-11-18 23:19:37.334231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.334240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:18.071 [2024-11-18 23:19:37.334248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:24:18.071 [2024-11-18 23:19:37.334255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.356728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.356793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:18.071 [2024-11-18 23:19:37.356824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.431 ms 00:24:18.071 [2024-11-18 23:19:37.356841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.356995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.357011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:18.071 [2024-11-18 23:19:37.357031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:18.071 [2024-11-18 23:19:37.357044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.367103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.367132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:18.071 [2024-11-18 23:19:37.367140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.969 ms 00:24:18.071 [2024-11-18 23:19:37.367146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.367183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.367194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:18.071 [2024-11-18 23:19:37.367202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:18.071 [2024-11-18 23:19:37.367209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.367633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.367646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:18.071 [2024-11-18 23:19:37.367654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:24:18.071 [2024-11-18 23:19:37.367660] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.367768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.367776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:18.071 [2024-11-18 23:19:37.367787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:18.071 [2024-11-18 23:19:37.367797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.373065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.373091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:18.071 [2024-11-18 23:19:37.373105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:24:18.071 [2024-11-18 23:19:37.373112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.375695] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:18.071 [2024-11-18 23:19:37.375724] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:18.071 [2024-11-18 23:19:37.375735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.375741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:18.071 [2024-11-18 23:19:37.375748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.537 ms 00:24:18.071 [2024-11-18 23:19:37.375754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.387223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.387276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:18.071 [2024-11-18 23:19:37.387299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.437 ms 00:24:18.071 [2024-11-18 23:19:37.387311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.071 [2024-11-18 23:19:37.388844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.071 [2024-11-18 23:19:37.388871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:18.071 [2024-11-18 23:19:37.388879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:24:18.071 [2024-11-18 23:19:37.388885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.390144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.390182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:18.072 [2024-11-18 23:19:37.390190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:24:18.072 [2024-11-18 23:19:37.390196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.390462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.390495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:18.072 [2024-11-18 23:19:37.390508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:24:18.072 [2024-11-18 23:19:37.390514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 
[2024-11-18 23:19:37.406732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.406772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:18.072 [2024-11-18 23:19:37.406782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.204 ms 00:24:18.072 [2024-11-18 23:19:37.406789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.412774] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:18.072 [2024-11-18 23:19:37.415330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.415360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:18.072 [2024-11-18 23:19:37.415375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.505 ms 00:24:18.072 [2024-11-18 23:19:37.415381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.415429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.415438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:18.072 [2024-11-18 23:19:37.415449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:18.072 [2024-11-18 23:19:37.415456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.415545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.415553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:18.072 [2024-11-18 23:19:37.415560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:18.072 [2024-11-18 23:19:37.415566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.415583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.415590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:18.072 [2024-11-18 23:19:37.415596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:18.072 [2024-11-18 23:19:37.415605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.415633] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:18.072 [2024-11-18 23:19:37.415646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.415652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:18.072 [2024-11-18 23:19:37.415658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:18.072 [2024-11-18 23:19:37.415666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.418760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.418789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:18.072 [2024-11-18 23:19:37.418802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:24:18.072 [2024-11-18 23:19:37.418808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.418866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:18.072 [2024-11-18 23:19:37.418877] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:18.072 [2024-11-18 23:19:37.418884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:18.072 [2024-11-18 23:19:37.418890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:18.072 [2024-11-18 23:19:37.419794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.288 ms, result 0 00:24:19.454 [2024-11-18T23:19:39.774Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-18T23:19:40.718Z] Copying: 42/1024 [MB] (17 MBps) [2024-11-18T23:19:41.660Z] Copying: 70/1024 [MB] (28 MBps) [2024-11-18T23:19:42.607Z] Copying: 89/1024 [MB] (19 MBps) [2024-11-18T23:19:43.551Z] Copying: 109/1024 [MB] (19 MBps) [2024-11-18T23:19:44.577Z] Copying: 132/1024 [MB] (23 MBps) [2024-11-18T23:19:45.517Z] Copying: 151/1024 [MB] (19 MBps) [2024-11-18T23:19:46.462Z] Copying: 169/1024 [MB] (17 MBps) [2024-11-18T23:19:47.851Z] Copying: 183632/1048576 [kB] (10168 kBps) [2024-11-18T23:19:48.790Z] Copying: 196/1024 [MB] (17 MBps) [2024-11-18T23:19:49.738Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-18T23:19:50.681Z] Copying: 222/1024 [MB] (14 MBps) [2024-11-18T23:19:51.625Z] Copying: 234/1024 [MB] (11 MBps) [2024-11-18T23:19:52.570Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-18T23:19:53.513Z] Copying: 256/1024 [MB] (12 MBps) [2024-11-18T23:19:54.458Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-18T23:19:55.849Z] Copying: 295/1024 [MB] (19 MBps) [2024-11-18T23:19:56.797Z] Copying: 306/1024 [MB] (11 MBps) [2024-11-18T23:19:57.742Z] Copying: 320/1024 [MB] (13 MBps) [2024-11-18T23:19:58.797Z] Copying: 332/1024 [MB] (12 MBps) [2024-11-18T23:19:59.741Z] Copying: 347/1024 [MB] (14 MBps) [2024-11-18T23:20:00.717Z] Copying: 363/1024 [MB] (16 MBps) [2024-11-18T23:20:01.658Z] Copying: 384/1024 [MB] (20 MBps) [2024-11-18T23:20:02.596Z] Copying: 405/1024 [MB] (21 MBps) [2024-11-18T23:20:03.539Z] Copying: 420/1024 [MB] (14 MBps) [2024-11-18T23:20:04.483Z] Copying: 437/1024 [MB] (16 MBps) [2024-11-18T23:20:05.871Z] Copying: 455/1024 [MB] (18 MBps) [2024-11-18T23:20:06.443Z] Copying: 469/1024 [MB] (13 MBps) [2024-11-18T23:20:07.828Z] Copying: 479/1024 [MB] (10 MBps) [2024-11-18T23:20:08.770Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-18T23:20:09.715Z] Copying: 501/1024 [MB] (11 MBps) [2024-11-18T23:20:10.659Z] Copying: 511/1024 [MB] (10 MBps) [2024-11-18T23:20:11.604Z] Copying: 527/1024 [MB] (15 MBps) [2024-11-18T23:20:12.547Z] Copying: 541/1024 [MB] (14 MBps) [2024-11-18T23:20:13.491Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-18T23:20:14.435Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-18T23:20:15.821Z] Copying: 583/1024 [MB] (19 MBps) [2024-11-18T23:20:16.764Z] Copying: 601/1024 [MB] (18 MBps) [2024-11-18T23:20:17.708Z] Copying: 618/1024 [MB] (16 MBps) [2024-11-18T23:20:18.653Z] Copying: 632/1024 [MB] (14 MBps) [2024-11-18T23:20:19.597Z] Copying: 650/1024 [MB] (18 MBps) [2024-11-18T23:20:20.538Z] Copying: 667/1024 [MB] (17 MBps) [2024-11-18T23:20:21.480Z] Copying: 687/1024 [MB] (19 MBps) [2024-11-18T23:20:22.865Z] Copying: 705/1024 [MB] (18 MBps) [2024-11-18T23:20:23.432Z] Copying: 722/1024 [MB] (16 MBps) [2024-11-18T23:20:24.809Z] Copying: 742/1024 [MB] (20 MBps) [2024-11-18T23:20:25.751Z] Copying: 758/1024 [MB] (15 MBps) [2024-11-18T23:20:26.694Z] Copying: 774/1024 [MB] (15 MBps) [2024-11-18T23:20:27.638Z] Copying: 790/1024 [MB] (15 MBps) [2024-11-18T23:20:28.581Z] Copying: 802/1024 [MB] (12 MBps) [2024-11-18T23:20:29.515Z] Copying: 812/1024 [MB] 
(10 MBps) [2024-11-18T23:20:30.451Z] Copying: 825/1024 [MB] (12 MBps) [2024-11-18T23:20:31.829Z] Copying: 837/1024 [MB] (11 MBps) [2024-11-18T23:20:32.797Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-18T23:20:33.737Z] Copying: 861/1024 [MB] (14 MBps) [2024-11-18T23:20:34.681Z] Copying: 871/1024 [MB] (10 MBps) [2024-11-18T23:20:35.619Z] Copying: 887/1024 [MB] (15 MBps) [2024-11-18T23:20:36.565Z] Copying: 903/1024 [MB] (16 MBps) [2024-11-18T23:20:37.531Z] Copying: 913/1024 [MB] (10 MBps) [2024-11-18T23:20:38.470Z] Copying: 924/1024 [MB] (10 MBps) [2024-11-18T23:20:39.849Z] Copying: 934/1024 [MB] (10 MBps) [2024-11-18T23:20:40.790Z] Copying: 948/1024 [MB] (13 MBps) [2024-11-18T23:20:41.734Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-18T23:20:42.746Z] Copying: 992096/1048576 [kB] (10144 kBps) [2024-11-18T23:20:43.684Z] Copying: 983/1024 [MB] (14 MBps) [2024-11-18T23:20:44.620Z] Copying: 1002/1024 [MB] (19 MBps) [2024-11-18T23:20:45.562Z] Copying: 1015/1024 [MB] (13 MBps) [2024-11-18T23:20:45.824Z] Copying: 1048284/1048576 [kB] (8056 kBps) [2024-11-18T23:20:45.824Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 23:20:45.741314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.741476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.446 [2024-11-18 23:20:45.741500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:26.446 [2024-11-18 23:20:45.741519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.742026] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.446 [2024-11-18 23:20:45.744378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.744502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.446 [2024-11-18 23:20:45.744521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:25:26.446 [2024-11-18 23:20:45.744529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.757402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.757438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.446 [2024-11-18 23:20:45.757450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.676 ms 00:25:26.446 [2024-11-18 23:20:45.757459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.780146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.780189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:26.446 [2024-11-18 23:20:45.780207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.671 ms 00:25:26.446 [2024-11-18 23:20:45.780215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.786289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.786325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:26.446 [2024-11-18 23:20:45.786335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.045 ms 00:25:26.446 [2024-11-18 23:20:45.786343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.788477] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.788510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:26.446 [2024-11-18 23:20:45.788521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:25:26.446 [2024-11-18 23:20:45.788528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.446 [2024-11-18 23:20:45.792147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.446 [2024-11-18 23:20:45.792195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:26.446 [2024-11-18 23:20:45.792205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:25:26.446 [2024-11-18 23:20:45.792214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.062610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.709 [2024-11-18 23:20:46.062656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:26.709 [2024-11-18 23:20:46.062668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 270.363 ms 00:25:26.709 [2024-11-18 23:20:46.062677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.065733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.709 [2024-11-18 23:20:46.065766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:26.709 [2024-11-18 23:20:46.065775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:25:26.709 [2024-11-18 23:20:46.065783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.067948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.709 [2024-11-18 23:20:46.067979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:26.709 [2024-11-18 23:20:46.067989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.138 ms 00:25:26.709 [2024-11-18 23:20:46.067995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.069600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.709 [2024-11-18 23:20:46.069630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:26.709 [2024-11-18 23:20:46.069639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:25:26.709 [2024-11-18 23:20:46.069646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.071228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.709 [2024-11-18 23:20:46.071257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:26.709 [2024-11-18 23:20:46.071265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:25:26.709 [2024-11-18 23:20:46.071272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.709 [2024-11-18 23:20:46.071301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:26.709 [2024-11-18 23:20:46.071314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:25:26.709 [2024-11-18 23:20:46.071326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 
[2024-11-18 23:20:46.071334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:25:26.709 [2024-11-18 23:20:46.071526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:26.709 [2024-11-18 23:20:46.071831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.071999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:26.710 [2024-11-18 23:20:46.072082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:26.710 [2024-11-18 23:20:46.072094] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
d010ea4a-a8d8-4554-bde3-40831b373ea1 00:25:26.710 [2024-11-18 23:20:46.072107] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:25:26.710 [2024-11-18 23:20:46.072115] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:25:26.710 [2024-11-18 23:20:46.072122] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:25:26.710 [2024-11-18 23:20:46.072131] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:25:26.710 [2024-11-18 23:20:46.072138] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:26.710 [2024-11-18 23:20:46.072146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:26.710 [2024-11-18 23:20:46.072153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:26.710 [2024-11-18 23:20:46.072170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:26.710 [2024-11-18 23:20:46.072177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:26.710 [2024-11-18 23:20:46.072191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.710 [2024-11-18 23:20:46.072198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:26.710 [2024-11-18 23:20:46.072207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:25:26.710 [2024-11-18 23:20:46.072214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.074108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.710 [2024-11-18 23:20:46.074130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:26.710 [2024-11-18 23:20:46.074140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:25:26.710 [2024-11-18 23:20:46.074149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.074266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.710 [2024-11-18 23:20:46.074276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:26.710 [2024-11-18 23:20:46.074289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:25:26.710 [2024-11-18 23:20:46.074302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.080193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.710 [2024-11-18 23:20:46.080299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.710 [2024-11-18 23:20:46.080349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.710 [2024-11-18 23:20:46.080372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.080435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.710 [2024-11-18 23:20:46.080464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.710 [2024-11-18 23:20:46.080487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.710 [2024-11-18 23:20:46.080510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.080614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.710 [2024-11-18 23:20:46.080641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
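Aside on the statistics dumped above: ftl_dev_dump_stats reports total writes 111296, user writes 110336, and WAF 1.0087 for ftl0. Assuming WAF (write amplification factor) is the ratio of total media writes to user writes — an assumption the figures in this dump are consistent with — the reported value reproduces as:

$$\mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{111296}{110336} \approx 1.0087$$

The small excess over 1.0 (960 blocks) reflects the metadata writes visible elsewhere in this trace, on top of the user I/O.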
00:25:26.710 [2024-11-18 23:20:46.080738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.710 [2024-11-18 23:20:46.080761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.710 [2024-11-18 23:20:46.080791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.710 [2024-11-18 23:20:46.080835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.710 [2024-11-18 23:20:46.080858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.710 [2024-11-18 23:20:46.080883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.092773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.092912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.971 [2024-11-18 23:20:46.092962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.971 [2024-11-18 23:20:46.092985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.102649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.102780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.971 [2024-11-18 23:20:46.102839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.971 [2024-11-18 23:20:46.102868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.102930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.102957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:26.971 [2024-11-18 23:20:46.102977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.971 [2024-11-18 23:20:46.102996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.103031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.103087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:26.971 [2024-11-18 23:20:46.103111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.971 [2024-11-18 23:20:46.103130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.103370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.103401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:26.971 [2024-11-18 23:20:46.103422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.971 [2024-11-18 23:20:46.103441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.971 [2024-11-18 23:20:46.103515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.971 [2024-11-18 23:20:46.103532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:26.972 [2024-11-18 23:20:46.103547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.972 [2024-11-18 23:20:46.103555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.972 [2024-11-18 23:20:46.103605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.972 [2024-11-18 23:20:46.103618] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:26.972 [2024-11-18 23:20:46.103627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.972 [2024-11-18 23:20:46.103638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.972 [2024-11-18 23:20:46.103683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:26.972 [2024-11-18 23:20:46.103693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:26.972 [2024-11-18 23:20:46.103702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:26.972 [2024-11-18 23:20:46.103710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.972 [2024-11-18 23:20:46.103850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 365.208 ms, result 0 00:25:27.914 00:25:27.914 00:25:27.914 23:20:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:30.458 23:20:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:30.459 [2024-11-18 23:20:49.248232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:30.459 [2024-11-18 23:20:49.248454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91102 ] 00:25:30.459 [2024-11-18 23:20:49.392012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.459 [2024-11-18 23:20:49.439758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.459 [2024-11-18 23:20:49.549000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.459 [2024-11-18 23:20:49.549080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:30.459 [2024-11-18 23:20:49.708571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.708618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:30.459 [2024-11-18 23:20:49.708636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:30.459 [2024-11-18 23:20:49.708645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.708695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.708705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:30.459 [2024-11-18 23:20:49.708713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:30.459 [2024-11-18 23:20:49.708721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.708741] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:30.459 [2024-11-18 23:20:49.708984] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:30.459 [2024-11-18 23:20:49.708999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 
[2024-11-18 23:20:49.709011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:30.459 [2024-11-18 23:20:49.709020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:25:30.459 [2024-11-18 23:20:49.709030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.710515] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:30.459 [2024-11-18 23:20:49.713769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.713808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:30.459 [2024-11-18 23:20:49.713819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:25:30.459 [2024-11-18 23:20:49.713827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.713887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.713897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:30.459 [2024-11-18 23:20:49.713908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:30.459 [2024-11-18 23:20:49.713916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.721116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.721286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:30.459 [2024-11-18 23:20:49.721303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.151 ms 00:25:30.459 [2024-11-18 23:20:49.721318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.721408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.721418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:30.459 [2024-11-18 23:20:49.721426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:30.459 [2024-11-18 23:20:49.721437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.721479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.721488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:30.459 [2024-11-18 23:20:49.721497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:30.459 [2024-11-18 23:20:49.721503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.721530] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:30.459 [2024-11-18 23:20:49.723415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.723444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:30.459 [2024-11-18 23:20:49.723453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.894 ms 00:25:30.459 [2024-11-18 23:20:49.723461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.723494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.723504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:30.459 
[2024-11-18 23:20:49.723512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:30.459 [2024-11-18 23:20:49.723519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.723553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:30.459 [2024-11-18 23:20:49.723580] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:30.459 [2024-11-18 23:20:49.723616] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:30.459 [2024-11-18 23:20:49.723632] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:30.459 [2024-11-18 23:20:49.723740] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:30.459 [2024-11-18 23:20:49.723751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:30.459 [2024-11-18 23:20:49.723761] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:30.459 [2024-11-18 23:20:49.723771] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:30.459 [2024-11-18 23:20:49.723782] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:30.459 [2024-11-18 23:20:49.723790] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:30.459 [2024-11-18 23:20:49.723797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:30.459 [2024-11-18 23:20:49.723804] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:30.459 [2024-11-18 23:20:49.723812] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:30.459 [2024-11-18 23:20:49.723823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.723831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:30.459 [2024-11-18 23:20:49.723839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:25:30.459 [2024-11-18 23:20:49.723846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.723932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.459 [2024-11-18 23:20:49.723948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:30.459 [2024-11-18 23:20:49.723955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:30.459 [2024-11-18 23:20:49.723962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.459 [2024-11-18 23:20:49.724070] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:30.459 [2024-11-18 23:20:49.724085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:30.459 [2024-11-18 23:20:49.724094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:30.459 [2024-11-18 23:20:49.724124] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:30.459 [2024-11-18 23:20:49.724147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.459 [2024-11-18 23:20:49.724200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:30.459 [2024-11-18 23:20:49.724210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:30.459 [2024-11-18 23:20:49.724218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:30.459 [2024-11-18 23:20:49.724225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:30.459 [2024-11-18 23:20:49.724233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:30.459 [2024-11-18 23:20:49.724243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:30.459 [2024-11-18 23:20:49.724258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:30.459 [2024-11-18 23:20:49.724282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:30.459 [2024-11-18 23:20:49.724304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:30.459 [2024-11-18 23:20:49.724327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.459 [2024-11-18 23:20:49.724346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:30.459 [2024-11-18 23:20:49.724354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:30.459 [2024-11-18 23:20:49.724379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:30.460 [2024-11-18 23:20:49.724387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:30.460 [2024-11-18 23:20:49.724395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:30.460 [2024-11-18 23:20:49.724403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.460 [2024-11-18 23:20:49.724411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:30.460 [2024-11-18 23:20:49.724418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:30.460 [2024-11-18 23:20:49.724425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:30.460 [2024-11-18 
23:20:49.724432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:30.460 [2024-11-18 23:20:49.724440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:30.460 [2024-11-18 23:20:49.724447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.460 [2024-11-18 23:20:49.724453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:30.460 [2024-11-18 23:20:49.724460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:30.460 [2024-11-18 23:20:49.724466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.460 [2024-11-18 23:20:49.724475] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:30.460 [2024-11-18 23:20:49.724483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:30.460 [2024-11-18 23:20:49.724493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:30.460 [2024-11-18 23:20:49.724503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:30.460 [2024-11-18 23:20:49.724512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:30.460 [2024-11-18 23:20:49.724519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:30.460 [2024-11-18 23:20:49.724526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:30.460 [2024-11-18 23:20:49.724533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:30.460 [2024-11-18 23:20:49.724540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:30.460 [2024-11-18 23:20:49.724546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:30.460 [2024-11-18 23:20:49.724554] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:30.460 [2024-11-18 23:20:49.724563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:30.460 [2024-11-18 23:20:49.724579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:30.460 [2024-11-18 23:20:49.724586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:30.460 [2024-11-18 23:20:49.724593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:30.460 [2024-11-18 23:20:49.724602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:30.460 [2024-11-18 23:20:49.724609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:30.460 [2024-11-18 23:20:49.724616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:30.460 [2024-11-18 23:20:49.724624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:30.460 [2024-11-18 23:20:49.724631] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:30.460 [2024-11-18 23:20:49.724638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:30.460 [2024-11-18 23:20:49.724673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:30.460 [2024-11-18 23:20:49.724681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:30.460 [2024-11-18 23:20:49.724697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:30.460 [2024-11-18 23:20:49.724704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:30.460 [2024-11-18 23:20:49.724711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:30.460 [2024-11-18 23:20:49.724721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.724729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:30.460 [2024-11-18 23:20:49.724737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:25:30.460 [2024-11-18 23:20:49.724745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.754286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.754363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:30.460 [2024-11-18 23:20:49.754388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.494 ms 00:25:30.460 [2024-11-18 23:20:49.754404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.754570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.754587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:30.460 [2024-11-18 23:20:49.754608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:25:30.460 [2024-11-18 23:20:49.754623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.766610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.766658] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:30.460 [2024-11-18 23:20:49.766672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.895 ms 00:25:30.460 [2024-11-18 23:20:49.766682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.766719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.766730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:30.460 [2024-11-18 23:20:49.766741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:30.460 [2024-11-18 23:20:49.766751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.767317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.767352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:30.460 [2024-11-18 23:20:49.767362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.505 ms 00:25:30.460 [2024-11-18 23:20:49.767371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.767507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.767517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:30.460 [2024-11-18 23:20:49.767526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:25:30.460 [2024-11-18 23:20:49.767535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.773946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.773980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:30.460 [2024-11-18 23:20:49.773997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.389 ms 00:25:30.460 [2024-11-18 23:20:49.774005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.777487] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:30.460 [2024-11-18 23:20:49.777532] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:30.460 [2024-11-18 23:20:49.777545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.777553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:30.460 [2024-11-18 23:20:49.777562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:25:30.460 [2024-11-18 23:20:49.777570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.792589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.792736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:30.460 [2024-11-18 23:20:49.792755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.977 ms 00:25:30.460 [2024-11-18 23:20:49.792763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.795261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.795293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:25:30.460 [2024-11-18 23:20:49.795302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.459 ms 00:25:30.460 [2024-11-18 23:20:49.795309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.797443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.797487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:30.460 [2024-11-18 23:20:49.797498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:25:30.460 [2024-11-18 23:20:49.797505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.797855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.797869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:30.460 [2024-11-18 23:20:49.797878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:25:30.460 [2024-11-18 23:20:49.797890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.460 [2024-11-18 23:20:49.818121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.460 [2024-11-18 23:20:49.818192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:30.460 [2024-11-18 23:20:49.818206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.214 ms 00:25:30.461 [2024-11-18 23:20:49.818214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-11-18 23:20:49.826067] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:30.461 [2024-11-18 23:20:49.829345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-11-18 23:20:49.829383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:30.461 [2024-11-18 23:20:49.829394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.088 ms 00:25:30.461 [2024-11-18 23:20:49.829407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-11-18 23:20:49.829484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-11-18 23:20:49.829495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:30.461 [2024-11-18 23:20:49.829504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:30.461 [2024-11-18 23:20:49.829512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-11-18 23:20:49.831290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-11-18 23:20:49.831323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:30.461 [2024-11-18 23:20:49.831336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:25:30.461 [2024-11-18 23:20:49.831344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-11-18 23:20:49.831369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-11-18 23:20:49.831378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:30.461 [2024-11-18 23:20:49.831386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:30.461 [2024-11-18 23:20:49.831393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.461 [2024-11-18 
23:20:49.831433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:30.461 [2024-11-18 23:20:49.831443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.461 [2024-11-18 23:20:49.831451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:30.461 [2024-11-18 23:20:49.831462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:30.461 [2024-11-18 23:20:49.831472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.722 [2024-11-18 23:20:49.836103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.722 [2024-11-18 23:20:49.836144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:30.722 [2024-11-18 23:20:49.836173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:25:30.722 [2024-11-18 23:20:49.836182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.722 [2024-11-18 23:20:49.836257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:30.722 [2024-11-18 23:20:49.836272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:30.722 [2024-11-18 23:20:49.836281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:30.722 [2024-11-18 23:20:49.836288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:30.722 [2024-11-18 23:20:49.837348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.313 ms, result 0 00:25:31.661  [2024-11-18T23:20:52.422Z] Copying: 1060/1048576 [kB] (1060 kBps) [2024-11-18T23:20:53.368Z] Copying: 4668/1048576 [kB] (3608 kBps) [2024-11-18T23:20:54.309Z] Copying: 16/1024 [MB] (11 MBps) [2024-11-18T23:20:55.251Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-18T23:20:56.192Z] Copying: 63/1024 [MB] (29 MBps) [2024-11-18T23:20:57.135Z] Copying: 85/1024 [MB] (22 MBps) [2024-11-18T23:20:58.076Z] Copying: 108/1024 [MB] (22 MBps) [2024-11-18T23:20:59.022Z] Copying: 132/1024 [MB] (24 MBps) [2024-11-18T23:21:00.408Z] Copying: 158/1024 [MB] (25 MBps) [2024-11-18T23:21:01.352Z] Copying: 186/1024 [MB] (28 MBps) [2024-11-18T23:21:02.299Z] Copying: 232/1024 [MB] (45 MBps) [2024-11-18T23:21:03.246Z] Copying: 248/1024 [MB] (15 MBps) [2024-11-18T23:21:04.190Z] Copying: 271/1024 [MB] (22 MBps) [2024-11-18T23:21:05.135Z] Copying: 296/1024 [MB] (25 MBps) [2024-11-18T23:21:06.078Z] Copying: 319/1024 [MB] (22 MBps) [2024-11-18T23:21:07.024Z] Copying: 340/1024 [MB] (21 MBps) [2024-11-18T23:21:08.410Z] Copying: 360/1024 [MB] (20 MBps) [2024-11-18T23:21:09.040Z] Copying: 389/1024 [MB] (28 MBps) [2024-11-18T23:21:10.432Z] Copying: 418/1024 [MB] (29 MBps) [2024-11-18T23:21:11.372Z] Copying: 450/1024 [MB] (31 MBps) [2024-11-18T23:21:12.306Z] Copying: 477/1024 [MB] (27 MBps) [2024-11-18T23:21:13.241Z] Copying: 503/1024 [MB] (25 MBps) [2024-11-18T23:21:14.177Z] Copying: 521/1024 [MB] (18 MBps) [2024-11-18T23:21:15.117Z] Copying: 539/1024 [MB] (17 MBps) [2024-11-18T23:21:16.053Z] Copying: 555/1024 [MB] (16 MBps) [2024-11-18T23:21:17.435Z] Copying: 579/1024 [MB] (23 MBps) [2024-11-18T23:21:18.373Z] Copying: 596/1024 [MB] (17 MBps) [2024-11-18T23:21:19.311Z] Copying: 613/1024 [MB] (17 MBps) [2024-11-18T23:21:20.255Z] Copying: 635/1024 [MB] (22 MBps) [2024-11-18T23:21:21.200Z] Copying: 653/1024 [MB] (17 MBps) [2024-11-18T23:21:22.143Z] Copying: 673/1024 [MB] (20 MBps) [2024-11-18T23:21:23.088Z] 
Copying: 694/1024 [MB] (20 MBps) [2024-11-18T23:21:24.031Z] Copying: 731/1024 [MB] (37 MBps) [2024-11-18T23:21:25.417Z] Copying: 757/1024 [MB] (25 MBps) [2024-11-18T23:21:26.362Z] Copying: 783/1024 [MB] (26 MBps) [2024-11-18T23:21:27.303Z] Copying: 810/1024 [MB] (26 MBps) [2024-11-18T23:21:28.248Z] Copying: 833/1024 [MB] (23 MBps) [2024-11-18T23:21:29.193Z] Copying: 860/1024 [MB] (26 MBps) [2024-11-18T23:21:30.137Z] Copying: 886/1024 [MB] (26 MBps) [2024-11-18T23:21:31.082Z] Copying: 911/1024 [MB] (25 MBps) [2024-11-18T23:21:32.029Z] Copying: 935/1024 [MB] (23 MBps) [2024-11-18T23:21:33.418Z] Copying: 958/1024 [MB] (22 MBps) [2024-11-18T23:21:34.362Z] Copying: 983/1024 [MB] (25 MBps) [2024-11-18T23:21:34.362Z] Copying: 1017/1024 [MB] (34 MBps) [2024-11-18T23:21:34.936Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-18 23:21:34.656334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.656783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:15.558 [2024-11-18 23:21:34.656897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:15.558 [2024-11-18 23:21:34.656934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.656998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:15.558 [2024-11-18 23:21:34.658437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.658652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:15.558 [2024-11-18 23:21:34.658916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 00:26:15.558 [2024-11-18 23:21:34.658971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.659385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.659438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:15.558 [2024-11-18 23:21:34.659471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:26:15.558 [2024-11-18 23:21:34.659576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.673008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.673204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:15.558 [2024-11-18 23:21:34.673293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.384 ms 00:26:15.558 [2024-11-18 23:21:34.673325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.679631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.679734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:15.558 [2024-11-18 23:21:34.679786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:26:15.558 [2024-11-18 23:21:34.679808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.682128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.682255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:15.558 [2024-11-18 23:21:34.682308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 
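Aside on the spdk_dd progress trace above: the copy completes at 1024/1024 MB with a reported average of 23 MBps. Assuming the average is taken over wall-clock time from the start of the transfer (shortly after the spdk_dd startup messages at 23:20:49) to the final progress stamp (23:21:34.936Z), roughly 45 seconds elapse, which is consistent with the reported figure:

$$\frac{1024\ \mathrm{MB}}{\approx 45\ \mathrm{s}} \approx 23\ \mathrm{MB/s}$$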
00:26:15.558 [2024-11-18 23:21:34.682333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.686207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.558 [2024-11-18 23:21:34.686313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:15.558 [2024-11-18 23:21:34.686334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.835 ms 00:26:15.558 [2024-11-18 23:21:34.686343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.558 [2024-11-18 23:21:34.691471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.559 [2024-11-18 23:21:34.691763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:15.559 [2024-11-18 23:21:34.691957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.087 ms 00:26:15.559 [2024-11-18 23:21:34.692063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.559 [2024-11-18 23:21:34.695635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.559 [2024-11-18 23:21:34.695874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:15.559 [2024-11-18 23:21:34.696047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms 00:26:15.559 [2024-11-18 23:21:34.696117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.559 [2024-11-18 23:21:34.699317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.559 [2024-11-18 23:21:34.699488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:15.559 [2024-11-18 23:21:34.699532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:26:15.559 [2024-11-18 23:21:34.699554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.559 [2024-11-18 23:21:34.701243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.559 [2024-11-18 23:21:34.701339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:15.559 [2024-11-18 23:21:34.701385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:26:15.559 [2024-11-18 23:21:34.701409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.559 [2024-11-18 23:21:34.703465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.559 [2024-11-18 23:21:34.703598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:15.559 [2024-11-18 23:21:34.703656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.725 ms 00:26:15.559 [2024-11-18 23:21:34.703681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.559 [2024-11-18 23:21:34.703749] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:15.559 [2024-11-18 23:21:34.703793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:15.559 [2024-11-18 23:21:34.703825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:15.559 [2024-11-18 23:21:34.703901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.703933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 
23:21:34.703961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.703988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.704997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:26:15.559 [2024-11-18 23:21:34.705439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.705996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:15.559 [2024-11-18 23:21:34.706328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:15.560 [2024-11-18 23:21:34.706605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:15.560 [2024-11-18 23:21:34.706614] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d010ea4a-a8d8-4554-bde3-40831b373ea1 00:26:15.560 [2024-11-18 23:21:34.706635] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:15.560 [2024-11-18 23:21:34.706643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 154304 
00:26:15.560 [2024-11-18 23:21:34.706650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 152320 00:26:15.560 [2024-11-18 23:21:34.706659] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0130 00:26:15.560 [2024-11-18 23:21:34.706666] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:15.560 [2024-11-18 23:21:34.706674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:15.560 [2024-11-18 23:21:34.706681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:15.560 [2024-11-18 23:21:34.706687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:15.560 [2024-11-18 23:21:34.706694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:15.560 [2024-11-18 23:21:34.706703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.560 [2024-11-18 23:21:34.706712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:15.560 [2024-11-18 23:21:34.706720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.956 ms 00:26:15.560 [2024-11-18 23:21:34.706728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.708669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.560 [2024-11-18 23:21:34.708700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:15.560 [2024-11-18 23:21:34.708709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:26:15.560 [2024-11-18 23:21:34.708721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.708818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:15.560 [2024-11-18 23:21:34.708831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:15.560 [2024-11-18 23:21:34.708842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:26:15.560 [2024-11-18 23:21:34.708849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.714674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.714707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:15.560 [2024-11-18 23:21:34.714717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.714725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.714774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.714782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:15.560 [2024-11-18 23:21:34.714796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.714804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.714855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.714866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:15.560 [2024-11-18 23:21:34.714874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.714881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.714896] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.714904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:15.560 [2024-11-18 23:21:34.714912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.714925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.726932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.726973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:15.560 [2024-11-18 23:21:34.726985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.726994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.736692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.736735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:15.560 [2024-11-18 23:21:34.736745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.736760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.736806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.736816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:15.560 [2024-11-18 23:21:34.736829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.736837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.736863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.736872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:15.560 [2024-11-18 23:21:34.736880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.736888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.736956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.736966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:15.560 [2024-11-18 23:21:34.736974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.736982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.737009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.737019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:15.560 [2024-11-18 23:21:34.737032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.737042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.737087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.737097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:15.560 [2024-11-18 23:21:34.737106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.737113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:15.560 [2024-11-18 23:21:34.737190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:15.560 [2024-11-18 23:21:34.737201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:15.560 [2024-11-18 23:21:34.737210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:15.560 [2024-11-18 23:21:34.737218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:15.560 [2024-11-18 23:21:34.737363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.007 ms, result 0 00:26:15.822 00:26:15.822 00:26:15.822 23:21:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:18.372 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:18.372 23:21:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:18.372 [2024-11-18 23:21:37.348295] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:26:18.372 [2024-11-18 23:21:37.348437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91592 ] 00:26:18.372 [2024-11-18 23:21:37.500124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.372 [2024-11-18 23:21:37.573562] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.372 [2024-11-18 23:21:37.723284] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:18.372 [2024-11-18 23:21:37.723386] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:18.634 [2024-11-18 23:21:37.887887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.634 [2024-11-18 23:21:37.887959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:18.634 [2024-11-18 23:21:37.887981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:18.634 [2024-11-18 23:21:37.887991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.634 [2024-11-18 23:21:37.888055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.634 [2024-11-18 23:21:37.888067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:18.634 [2024-11-18 23:21:37.888076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:18.634 [2024-11-18 23:21:37.888085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.634 [2024-11-18 23:21:37.888107] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:18.634 [2024-11-18 23:21:37.888434] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:18.635 [2024-11-18 23:21:37.888457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.888466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:18.635 [2024-11-18 23:21:37.888476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.353 ms 00:26:18.635 [2024-11-18 23:21:37.888487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.890854] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:18.635 [2024-11-18 23:21:37.895821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.896032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:18.635 [2024-11-18 23:21:37.896060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.971 ms 00:26:18.635 [2024-11-18 23:21:37.896070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.896198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.896215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:18.635 [2024-11-18 23:21:37.896229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:18.635 [2024-11-18 23:21:37.896238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.907724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.907775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:18.635 [2024-11-18 23:21:37.907797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.433 ms 00:26:18.635 [2024-11-18 23:21:37.907817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.907924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.907935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:18.635 [2024-11-18 23:21:37.907944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:26:18.635 [2024-11-18 23:21:37.907956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.908032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.908043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:18.635 [2024-11-18 23:21:37.908052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:18.635 [2024-11-18 23:21:37.908060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.908094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:18.635 [2024-11-18 23:21:37.910831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.911007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:18.635 [2024-11-18 23:21:37.911026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.746 ms 00:26:18.635 [2024-11-18 23:21:37.911058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.911102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.911112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:18.635 [2024-11-18 23:21:37.911123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:18.635 [2024-11-18 23:21:37.911131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:26:18.635 [2024-11-18 23:21:37.911187] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:18.635 [2024-11-18 23:21:37.911219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:18.635 [2024-11-18 23:21:37.911268] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:18.635 [2024-11-18 23:21:37.911295] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:18.635 [2024-11-18 23:21:37.911410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:18.635 [2024-11-18 23:21:37.911423] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:18.635 [2024-11-18 23:21:37.911434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:18.635 [2024-11-18 23:21:37.911445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911459] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911468] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:18.635 [2024-11-18 23:21:37.911476] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:18.635 [2024-11-18 23:21:37.911484] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:18.635 [2024-11-18 23:21:37.911493] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:18.635 [2024-11-18 23:21:37.911501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.911510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:18.635 [2024-11-18 23:21:37.911522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:18.635 [2024-11-18 23:21:37.911533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.911620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.635 [2024-11-18 23:21:37.911632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:18.635 [2024-11-18 23:21:37.911640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:18.635 [2024-11-18 23:21:37.911647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.635 [2024-11-18 23:21:37.911749] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:18.635 [2024-11-18 23:21:37.911760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:18.635 [2024-11-18 23:21:37.911773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:18.635 [2024-11-18 23:21:37.911808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911822] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:18.635 [2024-11-18 23:21:37.911831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.635 [2024-11-18 23:21:37.911851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:18.635 [2024-11-18 23:21:37.911859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:18.635 [2024-11-18 23:21:37.911866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.635 [2024-11-18 23:21:37.911872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:18.635 [2024-11-18 23:21:37.911879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:18.635 [2024-11-18 23:21:37.911887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:18.635 [2024-11-18 23:21:37.911905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:18.635 [2024-11-18 23:21:37.911925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:18.635 [2024-11-18 23:21:37.911946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:18.635 [2024-11-18 23:21:37.911971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.635 [2024-11-18 23:21:37.911984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:18.635 [2024-11-18 23:21:37.911991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:18.635 [2024-11-18 23:21:37.911998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.635 [2024-11-18 23:21:37.912006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:18.635 [2024-11-18 23:21:37.912013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:18.635 [2024-11-18 23:21:37.912019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.635 [2024-11-18 23:21:37.912026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:18.635 [2024-11-18 23:21:37.912033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:18.635 [2024-11-18 23:21:37.912040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.635 [2024-11-18 23:21:37.912046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:18.635 [2024-11-18 23:21:37.912052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:18.635 
[2024-11-18 23:21:37.912059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.912066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:18.635 [2024-11-18 23:21:37.912074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:18.635 [2024-11-18 23:21:37.912083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.912090] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:18.635 [2024-11-18 23:21:37.912098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:18.635 [2024-11-18 23:21:37.912106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.635 [2024-11-18 23:21:37.912117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.635 [2024-11-18 23:21:37.912129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:18.636 [2024-11-18 23:21:37.912137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:18.636 [2024-11-18 23:21:37.912144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:18.636 [2024-11-18 23:21:37.912167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:18.636 [2024-11-18 23:21:37.912175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:18.636 [2024-11-18 23:21:37.912182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:18.636 [2024-11-18 23:21:37.912191] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:18.636 [2024-11-18 23:21:37.912202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:18.636 [2024-11-18 23:21:37.912226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:18.636 [2024-11-18 23:21:37.912235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:18.636 [2024-11-18 23:21:37.912244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:18.636 [2024-11-18 23:21:37.912252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:18.636 [2024-11-18 23:21:37.912259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:18.636 [2024-11-18 23:21:37.912267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:18.636 [2024-11-18 23:21:37.912275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:18.636 [2024-11-18 23:21:37.912283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:18.636 [2024-11-18 23:21:37.912291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:18.636 [2024-11-18 23:21:37.912339] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:18.636 [2024-11-18 23:21:37.912348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:18.636 [2024-11-18 23:21:37.912363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:18.636 [2024-11-18 23:21:37.912370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:18.636 [2024-11-18 23:21:37.912380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:18.636 [2024-11-18 23:21:37.912389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.912397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:18.636 [2024-11-18 23:21:37.912404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:26:18.636 [2024-11-18 23:21:37.912411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.940482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.940739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:18.636 [2024-11-18 23:21:37.940769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.991 ms 00:26:18.636 [2024-11-18 23:21:37.940783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.940924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.940938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:18.636 [2024-11-18 23:21:37.940952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:18.636 [2024-11-18 23:21:37.940964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.957124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.957192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:18.636 [2024-11-18 23:21:37.957212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.069 ms 00:26:18.636 [2024-11-18 
23:21:37.957221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.957264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.957273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:18.636 [2024-11-18 23:21:37.957282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:18.636 [2024-11-18 23:21:37.957296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.958005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.958062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:18.636 [2024-11-18 23:21:37.958074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:26:18.636 [2024-11-18 23:21:37.958083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.958274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.958286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:18.636 [2024-11-18 23:21:37.958295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:26:18.636 [2024-11-18 23:21:37.958304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.968003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.968220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:18.636 [2024-11-18 23:21:37.968249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.674 ms 00:26:18.636 [2024-11-18 23:21:37.968263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.973056] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:18.636 [2024-11-18 23:21:37.973111] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:18.636 [2024-11-18 23:21:37.973125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.973135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:18.636 [2024-11-18 23:21:37.973146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms 00:26:18.636 [2024-11-18 23:21:37.973153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.989305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.989338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:18.636 [2024-11-18 23:21:37.989357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.070 ms 00:26:18.636 [2024-11-18 23:21:37.989365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.991512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.991629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:18.636 [2024-11-18 23:21:37.991643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.107 ms 00:26:18.636 [2024-11-18 23:21:37.991650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:26:18.636 [2024-11-18 23:21:37.993309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.993339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:18.636 [2024-11-18 23:21:37.993349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:26:18.636 [2024-11-18 23:21:37.993356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.636 [2024-11-18 23:21:37.993677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.636 [2024-11-18 23:21:37.993689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:18.636 [2024-11-18 23:21:37.993698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:26:18.636 [2024-11-18 23:21:37.993705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.012542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.012599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:18.897 [2024-11-18 23:21:38.012615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.804 ms 00:26:18.897 [2024-11-18 23:21:38.012626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.020544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:18.897 [2024-11-18 23:21:38.023388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.023421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:18.897 [2024-11-18 23:21:38.023439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.722 ms 00:26:18.897 [2024-11-18 23:21:38.023454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.023511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.023521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:18.897 [2024-11-18 23:21:38.023530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:18.897 [2024-11-18 23:21:38.023537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.024301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.024327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:18.897 [2024-11-18 23:21:38.024337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:26:18.897 [2024-11-18 23:21:38.024347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.024371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.024379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:18.897 [2024-11-18 23:21:38.024388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:18.897 [2024-11-18 23:21:38.024396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.024438] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:18.897 [2024-11-18 23:21:38.024449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:18.897 [2024-11-18 23:21:38.024457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:18.897 [2024-11-18 23:21:38.024466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:18.897 [2024-11-18 23:21:38.024473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.028962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.028997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:18.897 [2024-11-18 23:21:38.029007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:26:18.897 [2024-11-18 23:21:38.029016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.029087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.897 [2024-11-18 23:21:38.029097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:18.897 [2024-11-18 23:21:38.029105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:18.897 [2024-11-18 23:21:38.029113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.897 [2024-11-18 23:21:38.030592] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.233 ms, result 0 00:26:19.840  [2024-11-18T23:21:40.606Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-18T23:21:41.549Z] Copying: 41/1024 [MB] (18 MBps) [2024-11-18T23:21:42.493Z] Copying: 63/1024 [MB] (21 MBps) [2024-11-18T23:21:43.550Z] Copying: 86/1024 [MB] (23 MBps) [2024-11-18T23:21:44.493Z] Copying: 103/1024 [MB] (17 MBps) [2024-11-18T23:21:45.437Z] Copying: 119/1024 [MB] (15 MBps) [2024-11-18T23:21:46.379Z] Copying: 141/1024 [MB] (21 MBps) [2024-11-18T23:21:47.324Z] Copying: 156/1024 [MB] (15 MBps) [2024-11-18T23:21:48.267Z] Copying: 168/1024 [MB] (12 MBps) [2024-11-18T23:21:49.643Z] Copying: 179/1024 [MB] (10 MBps) [2024-11-18T23:21:50.216Z] Copying: 193/1024 [MB] (14 MBps) [2024-11-18T23:21:51.600Z] Copying: 204/1024 [MB] (11 MBps) [2024-11-18T23:21:52.541Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-18T23:21:53.479Z] Copying: 227/1024 [MB] (11 MBps) [2024-11-18T23:21:54.413Z] Copying: 241/1024 [MB] (13 MBps) [2024-11-18T23:21:55.354Z] Copying: 254/1024 [MB] (13 MBps) [2024-11-18T23:21:56.295Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-18T23:21:57.237Z] Copying: 287/1024 [MB] (22 MBps) [2024-11-18T23:21:58.641Z] Copying: 298/1024 [MB] (10 MBps) [2024-11-18T23:21:59.212Z] Copying: 309/1024 [MB] (10 MBps) [2024-11-18T23:22:00.592Z] Copying: 319/1024 [MB] (10 MBps) [2024-11-18T23:22:01.533Z] Copying: 331/1024 [MB] (11 MBps) [2024-11-18T23:22:02.476Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-18T23:22:03.417Z] Copying: 355/1024 [MB] (12 MBps) [2024-11-18T23:22:04.360Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-18T23:22:05.303Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-18T23:22:06.246Z] Copying: 393/1024 [MB] (15 MBps) [2024-11-18T23:22:07.629Z] Copying: 410/1024 [MB] (16 MBps) [2024-11-18T23:22:08.570Z] Copying: 427/1024 [MB] (17 MBps) [2024-11-18T23:22:09.513Z] Copying: 449/1024 [MB] (22 MBps) [2024-11-18T23:22:10.458Z] Copying: 467/1024 [MB] (17 MBps) [2024-11-18T23:22:11.401Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-18T23:22:12.343Z] Copying: 489/1024 [MB] (10 MBps) [2024-11-18T23:22:13.321Z] Copying: 503/1024 [MB] (14 MBps) [2024-11-18T23:22:14.283Z] Copying: 514/1024 [MB] (11 MBps) 
[2024-11-18T23:22:15.225Z] Copying: 534/1024 [MB] (19 MBps) [2024-11-18T23:22:16.611Z] Copying: 547/1024 [MB] (12 MBps) [2024-11-18T23:22:17.554Z] Copying: 564/1024 [MB] (17 MBps) [2024-11-18T23:22:18.498Z] Copying: 580/1024 [MB] (16 MBps) [2024-11-18T23:22:19.442Z] Copying: 595/1024 [MB] (14 MBps) [2024-11-18T23:22:20.386Z] Copying: 606/1024 [MB] (11 MBps) [2024-11-18T23:22:21.329Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-18T23:22:22.274Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-18T23:22:23.220Z] Copying: 638/1024 [MB] (10 MBps) [2024-11-18T23:22:24.607Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-18T23:22:25.552Z] Copying: 659/1024 [MB] (10 MBps) [2024-11-18T23:22:26.495Z] Copying: 669/1024 [MB] (10 MBps) [2024-11-18T23:22:27.438Z] Copying: 680/1024 [MB] (10 MBps) [2024-11-18T23:22:28.379Z] Copying: 696/1024 [MB] (16 MBps) [2024-11-18T23:22:29.324Z] Copying: 709/1024 [MB] (12 MBps) [2024-11-18T23:22:30.267Z] Copying: 719/1024 [MB] (10 MBps) [2024-11-18T23:22:31.212Z] Copying: 730/1024 [MB] (10 MBps) [2024-11-18T23:22:32.601Z] Copying: 741/1024 [MB] (11 MBps) [2024-11-18T23:22:33.546Z] Copying: 756/1024 [MB] (15 MBps) [2024-11-18T23:22:34.488Z] Copying: 767/1024 [MB] (10 MBps) [2024-11-18T23:22:35.433Z] Copying: 784/1024 [MB] (16 MBps) [2024-11-18T23:22:36.377Z] Copying: 799/1024 [MB] (15 MBps) [2024-11-18T23:22:37.322Z] Copying: 814/1024 [MB] (15 MBps) [2024-11-18T23:22:38.267Z] Copying: 834/1024 [MB] (19 MBps) [2024-11-18T23:22:39.212Z] Copying: 854/1024 [MB] (19 MBps) [2024-11-18T23:22:40.598Z] Copying: 869/1024 [MB] (15 MBps) [2024-11-18T23:22:41.543Z] Copying: 880/1024 [MB] (10 MBps) [2024-11-18T23:22:42.485Z] Copying: 890/1024 [MB] (10 MBps) [2024-11-18T23:22:43.426Z] Copying: 903/1024 [MB] (13 MBps) [2024-11-18T23:22:44.370Z] Copying: 929/1024 [MB] (26 MBps) [2024-11-18T23:22:45.349Z] Copying: 941/1024 [MB] (11 MBps) [2024-11-18T23:22:46.318Z] Copying: 960/1024 [MB] (19 MBps) [2024-11-18T23:22:47.261Z] Copying: 982/1024 [MB] (21 MBps) [2024-11-18T23:22:48.648Z] Copying: 996/1024 [MB] (14 MBps) [2024-11-18T23:22:48.910Z] Copying: 1012/1024 [MB] (15 MBps) [2024-11-18T23:22:49.173Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 23:22:48.987416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.795 [2024-11-18 23:22:48.987518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:29.795 [2024-11-18 23:22:48.987541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:29.796 [2024-11-18 23:22:48.987554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:48.987595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:29.796 [2024-11-18 23:22:48.988646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:48.988682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:29.796 [2024-11-18 23:22:48.988698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:27:29.796 [2024-11-18 23:22:48.988710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:48.990522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:48.990596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:29.796 [2024-11-18 23:22:48.990628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 
00:27:29.796 [2024-11-18 23:22:48.990655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:48.995356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:48.995546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:29.796 [2024-11-18 23:22:48.995611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:27:29.796 [2024-11-18 23:22:48.995636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.002006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.002191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:29.796 [2024-11-18 23:22:49.002597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.332 ms 00:27:29.796 [2024-11-18 23:22:49.002651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.005771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.005959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:29.796 [2024-11-18 23:22:49.006143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.002 ms 00:27:29.796 [2024-11-18 23:22:49.006200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.011834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.012025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:29.796 [2024-11-18 23:22:49.012086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.575 ms 00:27:29.796 [2024-11-18 23:22:49.012110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.016959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.017105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:29.796 [2024-11-18 23:22:49.017191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:27:29.796 [2024-11-18 23:22:49.017217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.020543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.020695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:29.796 [2024-11-18 23:22:49.020749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:27:29.796 [2024-11-18 23:22:49.020772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.023974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.024124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:29.796 [2024-11-18 23:22:49.024192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.907 ms 00:27:29.796 [2024-11-18 23:22:49.024217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.026697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.026842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:29.796 [2024-11-18 23:22:49.026908] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:27:29.796 [2024-11-18 23:22:49.026931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.029017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.796 [2024-11-18 23:22:49.029173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:29.796 [2024-11-18 23:22:49.029190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:27:29.796 [2024-11-18 23:22:49.029197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.796 [2024-11-18 23:22:49.029232] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:29.796 [2024-11-18 23:22:49.029258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:29.796 [2024-11-18 23:22:49.029270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:29.796 [2024-11-18 23:22:49.029279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
20: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029618] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:29.796 [2024-11-18 23:22:49.029657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029818] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.029992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 
23:22:49.030018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:29.797 [2024-11-18 23:22:49.030076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:29.797 [2024-11-18 23:22:49.030086] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d010ea4a-a8d8-4554-bde3-40831b373ea1 00:27:29.797 [2024-11-18 23:22:49.030095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:29.797 [2024-11-18 23:22:49.030103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:29.797 [2024-11-18 23:22:49.030112] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:29.797 [2024-11-18 23:22:49.030121] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:29.797 [2024-11-18 23:22:49.030130] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:29.797 [2024-11-18 23:22:49.030138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:29.797 [2024-11-18 23:22:49.030147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:29.797 [2024-11-18 23:22:49.030167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:29.797 [2024-11-18 23:22:49.030175] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:29.797 [2024-11-18 23:22:49.030183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.797 [2024-11-18 23:22:49.030192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:29.797 [2024-11-18 23:22:49.030210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:27:29.797 [2024-11-18 23:22:49.030218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.033440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.797 [2024-11-18 23:22:49.033472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:29.797 [2024-11-18 23:22:49.033483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.195 ms 00:27:29.797 [2024-11-18 23:22:49.033492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.033654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:29.797 [2024-11-18 23:22:49.033663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:29.797 [2024-11-18 23:22:49.033673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:27:29.797 [2024-11-18 23:22:49.033680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.042664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:29.797 [2024-11-18 23:22:49.042713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:29.797 [2024-11-18 23:22:49.042725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.042734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.042806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.042816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:29.797 [2024-11-18 23:22:49.042831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.042840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.042886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.042920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:29.797 [2024-11-18 23:22:49.042929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.042937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.042954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.042967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:29.797 [2024-11-18 23:22:49.042976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.042984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.062168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.062217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:29.797 [2024-11-18 23:22:49.062230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.062249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.077571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.077786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:29.797 [2024-11-18 23:22:49.077806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.797 [2024-11-18 23:22:49.077816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.797 [2024-11-18 23:22:49.077885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.797 [2024-11-18 23:22:49.077897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:29.798 [2024-11-18 23:22:49.077908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.077917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 23:22:49.077964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.798 [2024-11-18 23:22:49.077975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:29.798 [2024-11-18 23:22:49.077995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.078004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 
23:22:49.078098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.798 [2024-11-18 23:22:49.078109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:29.798 [2024-11-18 23:22:49.078119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.078127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 23:22:49.078187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.798 [2024-11-18 23:22:49.078199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:29.798 [2024-11-18 23:22:49.078209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.078222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 23:22:49.078275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.798 [2024-11-18 23:22:49.078286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:29.798 [2024-11-18 23:22:49.078296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.078306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 23:22:49.078369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:29.798 [2024-11-18 23:22:49.078381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:29.798 [2024-11-18 23:22:49.078397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:29.798 [2024-11-18 23:22:49.078407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:29.798 [2024-11-18 23:22:49.078577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 91.124 ms, result 0 00:27:30.058 00:27:30.058 00:27:30.058 23:22:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:32.608 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:32.608 Process with pid 89499 is not found 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89499 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89499 ']' 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 89499 00:27:32.608 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89499) - No such process 00:27:32.608 23:22:51 
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89499 is not found' 00:27:32.608 23:22:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:32.869 Remove shared memory files 00:27:32.869 23:22:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:32.869 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:32.869 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:33.132 ************************************ 00:27:33.132 END TEST ftl_dirty_shutdown 00:27:33.132 ************************************ 00:27:33.132 00:27:33.132 real 4m32.522s 00:27:33.132 user 5m2.618s 00:27:33.132 sys 0m28.470s 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:33.132 23:22:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.132 23:22:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:33.132 23:22:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:33.132 23:22:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:33.132 23:22:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:33.132 ************************************ 00:27:33.132 START TEST ftl_upgrade_shutdown 00:27:33.132 ************************************ 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:33.132 * Looking for test storage... 
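(Note on the teardown above: the "WAF: inf" in the device stats is the write-amplification ratio of total media writes to user writes; with 960 total writes and 0 user writes in this window the ratio has no finite value, hence "inf". The "testfile2: OK" line is the actual pass condition of the dirty-shutdown test: a checksum recorded before the unclean shutdown is re-verified once the FTL device has been recovered. A minimal sketch of that round trip, with illustrative paths rather than the test's exact helpers:)

    # Minimal sketch of the md5 round-trip verified above
    # (illustrative paths, not dirty_shutdown.sh's exact code).
    testfile=/tmp/ftl_testfile
    dd if=/dev/urandom of="$testfile" bs=4K count=256   # write known data
    md5sum "$testfile" > "$testfile.md5"                # record the checksum
    # ... unclean shutdown and FTL recovery happen in between ...
    md5sum -c "$testfile.md5"                           # prints "<file>: OK" when the contents survived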
00:27:33.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:33.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.132 --rc genhtml_branch_coverage=1 00:27:33.132 --rc genhtml_function_coverage=1 00:27:33.132 --rc genhtml_legend=1 00:27:33.132 --rc geninfo_all_blocks=1 00:27:33.132 --rc geninfo_unexecuted_blocks=1 00:27:33.132 00:27:33.132 ' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:33.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.132 --rc genhtml_branch_coverage=1 00:27:33.132 --rc genhtml_function_coverage=1 00:27:33.132 --rc genhtml_legend=1 00:27:33.132 --rc geninfo_all_blocks=1 00:27:33.132 --rc geninfo_unexecuted_blocks=1 00:27:33.132 00:27:33.132 ' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:33.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.132 --rc genhtml_branch_coverage=1 00:27:33.132 --rc genhtml_function_coverage=1 00:27:33.132 --rc genhtml_legend=1 00:27:33.132 --rc geninfo_all_blocks=1 00:27:33.132 --rc geninfo_unexecuted_blocks=1 00:27:33.132 00:27:33.132 ' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:33.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.132 --rc genhtml_branch_coverage=1 00:27:33.132 --rc genhtml_function_coverage=1 00:27:33.132 --rc genhtml_legend=1 00:27:33.132 --rc geninfo_all_blocks=1 00:27:33.132 --rc geninfo_unexecuted_blocks=1 00:27:33.132 00:27:33.132 ' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:33.132 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:33.133 23:22:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92422 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92422 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92422 ']' 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:33.133 23:22:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:33.395 [2024-11-18 23:22:52.583627] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
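(The xtrace above walks scripts/common.sh's version gate: `lt 1.15 2` splits each version string on the characters `.-:`, then compares the resulting fields numerically from left to right, succeeding here because 1 < 2 in the first field. A simplified re-implementation of that traced logic — a sketch, not the repository's exact cmp_versions:)

    # Sketch of the field-wise compare traced above
    # (not the exact scripts/common.sh implementation).
    ver_lt() {                      # succeed when $1 sorts before $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"   # mirrors the lt 1.15 2 check above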
00:27:33.395 [2024-11-18 23:22:52.583970] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92422 ] 00:27:33.395 [2024-11-18 23:22:52.739974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.656 [2024-11-18 23:22:52.790250] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:34.227 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:34.489 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:34.751 { 00:27:34.751 "name": "basen1", 00:27:34.751 "aliases": [ 00:27:34.751 "80e88143-fca8-4105-87fd-a1d6e601d3eb" 00:27:34.751 ], 00:27:34.751 "product_name": "NVMe disk", 00:27:34.751 "block_size": 4096, 00:27:34.751 "num_blocks": 1310720, 00:27:34.751 "uuid": "80e88143-fca8-4105-87fd-a1d6e601d3eb", 00:27:34.751 "numa_id": -1, 00:27:34.751 "assigned_rate_limits": { 00:27:34.751 "rw_ios_per_sec": 0, 00:27:34.751 "rw_mbytes_per_sec": 0, 00:27:34.751 "r_mbytes_per_sec": 0, 00:27:34.751 "w_mbytes_per_sec": 0 00:27:34.751 }, 00:27:34.751 "claimed": true, 00:27:34.751 "claim_type": "read_many_write_one", 00:27:34.751 "zoned": false, 00:27:34.751 "supported_io_types": { 00:27:34.751 "read": true, 00:27:34.751 "write": true, 00:27:34.751 "unmap": true, 00:27:34.751 "flush": true, 00:27:34.751 "reset": true, 00:27:34.751 "nvme_admin": true, 00:27:34.751 "nvme_io": true, 00:27:34.751 "nvme_io_md": false, 00:27:34.751 "write_zeroes": true, 00:27:34.751 "zcopy": false, 00:27:34.751 "get_zone_info": false, 00:27:34.751 "zone_management": false, 00:27:34.751 "zone_append": false, 00:27:34.751 "compare": true, 00:27:34.751 "compare_and_write": false, 00:27:34.751 "abort": true, 00:27:34.751 "seek_hole": false, 00:27:34.751 "seek_data": false, 00:27:34.751 "copy": true, 00:27:34.751 "nvme_iov_md": false 00:27:34.751 }, 00:27:34.751 "driver_specific": { 00:27:34.751 "nvme": [ 00:27:34.751 { 00:27:34.751 "pci_address": "0000:00:11.0", 00:27:34.751 "trid": { 00:27:34.751 "trtype": "PCIe", 00:27:34.751 "traddr": "0000:00:11.0" 00:27:34.751 }, 00:27:34.751 "ctrlr_data": { 00:27:34.751 "cntlid": 0, 00:27:34.751 "vendor_id": "0x1b36", 00:27:34.751 "model_number": "QEMU NVMe Ctrl", 00:27:34.751 "serial_number": "12341", 00:27:34.751 "firmware_revision": "8.0.0", 00:27:34.751 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:34.751 "oacs": { 00:27:34.751 "security": 0, 00:27:34.751 "format": 1, 00:27:34.751 "firmware": 0, 00:27:34.751 "ns_manage": 1 00:27:34.751 }, 00:27:34.751 "multi_ctrlr": false, 00:27:34.751 "ana_reporting": false 00:27:34.751 }, 00:27:34.751 "vs": { 00:27:34.751 "nvme_version": "1.4" 00:27:34.751 }, 00:27:34.751 "ns_data": { 00:27:34.751 "id": 1, 00:27:34.751 "can_share": false 00:27:34.751 } 00:27:34.751 } 00:27:34.751 ], 00:27:34.751 "mp_policy": "active_passive" 00:27:34.751 } 00:27:34.751 } 00:27:34.751 ]' 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:34.751 23:22:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:35.012 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=357a325c-b3a9-478a-b762-7b8c20585fd4 00:27:35.012 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:35.012 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 357a325c-b3a9-478a-b762-7b8c20585fd4 00:27:35.012 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:35.273 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=bf06058c-7c3c-46da-a72b-5aeec6826756 00:27:35.273 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u bf06058c-7c3c-46da-a72b-5aeec6826756 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=94f745be-e384-4dc1-ace3-0740a8377c84 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 94f745be-e384-4dc1-ace3-0740a8377c84 ]] 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 94f745be-e384-4dc1-ace3-0740a8377c84 5120 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:35.533 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=94f745be-e384-4dc1-ace3-0740a8377c84 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 94f745be-e384-4dc1-ace3-0740a8377c84 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=94f745be-e384-4dc1-ace3-0740a8377c84 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:35.534 23:22:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 94f745be-e384-4dc1-ace3-0740a8377c84 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:35.795 { 00:27:35.795 "name": "94f745be-e384-4dc1-ace3-0740a8377c84", 00:27:35.795 "aliases": [ 00:27:35.795 "lvs/basen1p0" 00:27:35.795 ], 00:27:35.795 "product_name": "Logical Volume", 00:27:35.795 "block_size": 4096, 00:27:35.795 "num_blocks": 5242880, 00:27:35.795 "uuid": "94f745be-e384-4dc1-ace3-0740a8377c84", 00:27:35.795 "assigned_rate_limits": { 00:27:35.795 "rw_ios_per_sec": 0, 00:27:35.795 "rw_mbytes_per_sec": 0, 00:27:35.795 "r_mbytes_per_sec": 0, 00:27:35.795 "w_mbytes_per_sec": 0 00:27:35.795 }, 00:27:35.795 "claimed": false, 00:27:35.795 "zoned": false, 00:27:35.795 "supported_io_types": { 00:27:35.795 "read": true, 00:27:35.795 "write": true, 00:27:35.795 "unmap": true, 00:27:35.795 "flush": false, 00:27:35.795 "reset": true, 00:27:35.795 "nvme_admin": false, 00:27:35.795 "nvme_io": false, 00:27:35.795 "nvme_io_md": false, 00:27:35.795 "write_zeroes": 
true, 00:27:35.795 "zcopy": false, 00:27:35.795 "get_zone_info": false, 00:27:35.795 "zone_management": false, 00:27:35.795 "zone_append": false, 00:27:35.795 "compare": false, 00:27:35.795 "compare_and_write": false, 00:27:35.795 "abort": false, 00:27:35.795 "seek_hole": true, 00:27:35.795 "seek_data": true, 00:27:35.795 "copy": false, 00:27:35.795 "nvme_iov_md": false 00:27:35.795 }, 00:27:35.795 "driver_specific": { 00:27:35.795 "lvol": { 00:27:35.795 "lvol_store_uuid": "bf06058c-7c3c-46da-a72b-5aeec6826756", 00:27:35.795 "base_bdev": "basen1", 00:27:35.795 "thin_provision": true, 00:27:35.795 "num_allocated_clusters": 0, 00:27:35.795 "snapshot": false, 00:27:35.795 "clone": false, 00:27:35.795 "esnap_clone": false 00:27:35.795 } 00:27:35.795 } 00:27:35.795 } 00:27:35.795 ]' 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:35.795 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:36.056 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:36.056 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:36.056 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:36.318 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:36.318 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:36.319 23:22:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 94f745be-e384-4dc1-ace3-0740a8377c84 -c cachen1p0 --l2p_dram_limit 2 00:27:36.581 [2024-11-18 23:22:55.761994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.762081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:36.581 [2024-11-18 23:22:55.762101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:36.581 [2024-11-18 23:22:55.762114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.762219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.762237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.581 [2024-11-18 23:22:55.762247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:27:36.581 [2024-11-18 23:22:55.762265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.762292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:36.581 [2024-11-18 
23:22:55.762621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:36.581 [2024-11-18 23:22:55.762638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.762655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.581 [2024-11-18 23:22:55.762667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:27:36.581 [2024-11-18 23:22:55.762678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.762716] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID c31e050b-0568-4228-9bf4-65df49cad241 00:27:36.581 [2024-11-18 23:22:55.765097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.765311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:36.581 [2024-11-18 23:22:55.765339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:27:36.581 [2024-11-18 23:22:55.765349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.778379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.778443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.581 [2024-11-18 23:22:55.778458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.911 ms 00:27:36.581 [2024-11-18 23:22:55.778467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.778531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.778540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.581 [2024-11-18 23:22:55.778552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:36.581 [2024-11-18 23:22:55.778564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.778635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.778647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:36.581 [2024-11-18 23:22:55.778659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:36.581 [2024-11-18 23:22:55.778667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.778702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:36.581 [2024-11-18 23:22:55.781658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.781707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.581 [2024-11-18 23:22:55.781723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.967 ms 00:27:36.581 [2024-11-18 23:22:55.781735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.781770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.781783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:36.581 [2024-11-18 23:22:55.781794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:36.581 [2024-11-18 23:22:55.781809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.781829] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:36.581 [2024-11-18 23:22:55.782000] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:36.581 [2024-11-18 23:22:55.782015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:36.581 [2024-11-18 23:22:55.782031] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:36.581 [2024-11-18 23:22:55.782042] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:36.581 [2024-11-18 23:22:55.782063] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:36.581 [2024-11-18 23:22:55.782073] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:36.581 [2024-11-18 23:22:55.782091] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:36.581 [2024-11-18 23:22:55.782100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:36.581 [2024-11-18 23:22:55.782111] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:36.581 [2024-11-18 23:22:55.782125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.782136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:36.581 [2024-11-18 23:22:55.782145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:27:36.581 [2024-11-18 23:22:55.782177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.782266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.581 [2024-11-18 23:22:55.782281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:36.581 [2024-11-18 23:22:55.782289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:27:36.581 [2024-11-18 23:22:55.782300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.581 [2024-11-18 23:22:55.782404] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:36.581 [2024-11-18 23:22:55.782420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:36.581 [2024-11-18 23:22:55.782429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.581 [2024-11-18 23:22:55.782440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.581 [2024-11-18 23:22:55.782449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:36.581 [2024-11-18 23:22:55.782459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:36.581 [2024-11-18 23:22:55.782466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:36.581 [2024-11-18 23:22:55.782478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:36.581 [2024-11-18 23:22:55.782485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:36.581 [2024-11-18 23:22:55.782495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.581 [2024-11-18 23:22:55.782503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:36.581 [2024-11-18 23:22:55.782512] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:36.581 [2024-11-18 23:22:55.782519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.581 [2024-11-18 23:22:55.782531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:36.581 [2024-11-18 23:22:55.782538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:36.581 [2024-11-18 23:22:55.782547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.581 [2024-11-18 23:22:55.782555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:36.581 [2024-11-18 23:22:55.782565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:36.582 [2024-11-18 23:22:55.782571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:36.582 [2024-11-18 23:22:55.782588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:36.582 [2024-11-18 23:22:55.782617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:36.582 [2024-11-18 23:22:55.782642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:36.582 [2024-11-18 23:22:55.782670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:36.582 [2024-11-18 23:22:55.782692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:36.582 [2024-11-18 23:22:55.782720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:36.582 [2024-11-18 23:22:55.782745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:36.582 [2024-11-18 23:22:55.782785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:36.582 [2024-11-18 23:22:55.782792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782801] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:36.582 [2024-11-18 23:22:55.782810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:36.582 [2024-11-18 23:22:55.782824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:36.582 [2024-11-18 23:22:55.782843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:36.582 [2024-11-18 23:22:55.782851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:36.582 [2024-11-18 23:22:55.782860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:36.582 [2024-11-18 23:22:55.782898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:36.582 [2024-11-18 23:22:55.782908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:36.582 [2024-11-18 23:22:55.782915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:36.582 [2024-11-18 23:22:55.782931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:36.582 [2024-11-18 23:22:55.782943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.782955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:36.582 [2024-11-18 23:22:55.782964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.782975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.782985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:36.582 [2024-11-18 23:22:55.782998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:36.582 [2024-11-18 23:22:55.783006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:36.582 [2024-11-18 23:22:55.783021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:36.582 [2024-11-18 23:22:55.783040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:36.582 [2024-11-18 23:22:55.783101] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:36.582 [2024-11-18 23:22:55.783114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:36.582 [2024-11-18 23:22:55.783134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:36.582 [2024-11-18 23:22:55.783144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:36.582 [2024-11-18 23:22:55.783165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:36.582 [2024-11-18 23:22:55.783177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.582 [2024-11-18 23:22:55.783186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:36.582 [2024-11-18 23:22:55.783200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.838 ms 00:27:36.582 [2024-11-18 23:22:55.783207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.582 [2024-11-18 23:22:55.783281] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
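The SB metadata layout just dumped describes the same regions as the earlier MiB-based layout dump, only in hex FTL blocks (blk_offs/blk_sz). A minimal conversion sketch, assuming SPDK's 4 KiB FTL block size; blk_to_mib is a hypothetical helper, not something the test scripts define:

# convert a region's hex block count to whole MiB (4 KiB FTL blocks assumed)
blk_to_mib() { echo $(( $1 * 4096 / 1024 / 1024 )); }
blk_to_mib 0x480000   # -> 18432, matching the 18432.00 MiB data_btm region
blk_to_mib 0xe80      # -> 14 (integer floor of 14.50 MiB, the l2p region)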
00:27:36.582 [2024-11-18 23:22:55.783293] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:40.805 [2024-11-18 23:22:59.755483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.805 [2024-11-18 23:22:59.755558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:40.805 [2024-11-18 23:22:59.755581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3972.199 ms 00:27:40.805 [2024-11-18 23:22:59.755594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.805 [2024-11-18 23:22:59.767559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.805 [2024-11-18 23:22:59.767608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:40.805 [2024-11-18 23:22:59.767629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.861 ms 00:27:40.805 [2024-11-18 23:22:59.767638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.805 [2024-11-18 23:22:59.767688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.805 [2024-11-18 23:22:59.767697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:40.805 [2024-11-18 23:22:59.767712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:40.805 [2024-11-18 23:22:59.767720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.805 [2024-11-18 23:22:59.778690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.805 [2024-11-18 23:22:59.778734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:40.805 [2024-11-18 23:22:59.778754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.916 ms 00:27:40.805 [2024-11-18 23:22:59.778763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.778797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.778809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:40.806 [2024-11-18 23:22:59.778820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:40.806 [2024-11-18 23:22:59.778827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.779347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.779365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:40.806 [2024-11-18 23:22:59.779378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.461 ms 00:27:40.806 [2024-11-18 23:22:59.779387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.779435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.779445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:40.806 [2024-11-18 23:22:59.779459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:40.806 [2024-11-18 23:22:59.779468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.806560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.806652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:40.806 [2024-11-18 23:22:59.806690] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.057 ms 00:27:40.806 [2024-11-18 23:22:59.806712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.817000] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:40.806 [2024-11-18 23:22:59.818134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.818187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:40.806 [2024-11-18 23:22:59.818199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.166 ms 00:27:40.806 [2024-11-18 23:22:59.818212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.834249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.834295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:40.806 [2024-11-18 23:22:59.834307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.009 ms 00:27:40.806 [2024-11-18 23:22:59.834320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.834413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.834426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:40.806 [2024-11-18 23:22:59.834435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:40.806 [2024-11-18 23:22:59.834446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.837747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.837892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:40.806 [2024-11-18 23:22:59.837910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.269 ms 00:27:40.806 [2024-11-18 23:22:59.837921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.841768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.841808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:40.806 [2024-11-18 23:22:59.841819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.764 ms 00:27:40.806 [2024-11-18 23:22:59.841828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.842149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.842179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:40.806 [2024-11-18 23:22:59.842188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:27:40.806 [2024-11-18 23:22:59.842250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.874870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.874915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:40.806 [2024-11-18 23:22:59.874928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.574 ms 00:27:40.806 [2024-11-18 23:22:59.874938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.879754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:40.806 [2024-11-18 23:22:59.879795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:40.806 [2024-11-18 23:22:59.879806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.763 ms 00:27:40.806 [2024-11-18 23:22:59.879817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.884027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.884177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:40.806 [2024-11-18 23:22:59.884193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.173 ms 00:27:40.806 [2024-11-18 23:22:59.884203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.888783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.888823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:40.806 [2024-11-18 23:22:59.888833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.506 ms 00:27:40.806 [2024-11-18 23:22:59.888845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.888886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.888899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:40.806 [2024-11-18 23:22:59.888908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:40.806 [2024-11-18 23:22:59.888918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.888988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.806 [2024-11-18 23:22:59.889000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:40.806 [2024-11-18 23:22:59.889008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:40.806 [2024-11-18 23:22:59.889018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.806 [2024-11-18 23:22:59.890141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4127.722 ms, result 0 00:27:40.806 { 00:27:40.806 "name": "ftl", 00:27:40.806 "uuid": "c31e050b-0568-4228-9bf4-65df49cad241" 00:27:40.806 } 00:27:40.806 23:22:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:40.806 [2024-11-18 23:23:00.106112] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.806 23:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:41.066 23:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:41.326 [2024-11-18 23:23:00.542634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:41.326 23:23:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:41.586 [2024-11-18 23:23:00.747069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:41.586 23:23:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:41.847 Fill FTL, iteration 1 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=92548 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 92548 /var/tmp/spdk.tgt.sock 00:27:41.847 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92548 ']' 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:41.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:41.848 23:23:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:41.848 [2024-11-18 23:23:01.179814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:41.848 [2024-11-18 23:23:01.180362] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92548 ] 00:27:42.108 [2024-11-18 23:23:01.328919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.109 [2024-11-18 23:23:01.394217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.680 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.680 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:42.680 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:42.941 ftln1 00:27:42.941 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:42.941 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 92548 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92548 ']' 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92548 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92548 00:27:43.202 killing process with pid 92548 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92548' 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92548 00:27:43.202 23:23:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92548 00:27:43.768 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:43.768 23:23:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:43.768 [2024-11-18 23:23:02.925317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
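The fill step launched above is spdk_dd writing urandom into the NVMe/TCP-attached ftln1 bdev described by the saved initiator config. The same invocation as in the trace, reformatted for readability:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 \
    --bs=1048576 --count=1024 --qd=2 --seek=0

--qd=2 keeps two 1 MiB I/Os in flight, and --seek counts --bs-sized blocks, as the second iteration's --seek=1024 with the same 1 MiB --bs indicates.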
00:27:43.768 [2024-11-18 23:23:02.925437] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92579 ] 00:27:43.768 [2024-11-18 23:23:03.074949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.768 [2024-11-18 23:23:03.103510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.143  [2024-11-18T23:23:05.455Z] Copying: 246/1024 [MB] (246 MBps) [2024-11-18T23:23:06.390Z] Copying: 488/1024 [MB] (242 MBps) [2024-11-18T23:23:07.325Z] Copying: 740/1024 [MB] (252 MBps) [2024-11-18T23:23:07.584Z] Copying: 986/1024 [MB] (246 MBps) [2024-11-18T23:23:07.842Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:27:48.464 00:27:48.464 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:48.464 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:48.464 Calculate MD5 checksum, iteration 1 00:27:48.464 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.465 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.465 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.465 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.465 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:48.465 23:23:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:48.465 [2024-11-18 23:23:07.681953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:48.465 [2024-11-18 23:23:07.682080] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92632 ] 00:27:48.465 [2024-11-18 23:23:07.829433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.723 [2024-11-18 23:23:07.857671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.660  [2024-11-18T23:23:09.975Z] Copying: 636/1024 [MB] (636 MBps) [2024-11-18T23:23:09.975Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:27:50.597 00:27:50.597 23:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:50.597 23:23:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:53.142 Fill FTL, iteration 2 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2e88461f0dd535d2a57b506be0ec77b7 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:53.142 23:23:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:53.142 [2024-11-18 23:23:11.950990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
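Each iteration follows the same write-then-verify pattern: fill a 1024 MiB window of ftln1 through the FTL, read the same window back out, and record its md5 so the data can be re-checked after the prep-upgrade shutdown. A condensed sketch of one iteration, reconstructed from the upgrade_shutdown.sh@28-48 trace above (tcp_dd is the suite's spdk_dd wrapper, and "file" stands in for the script's output path):

# one fill+verify iteration, condensed from the trace above
seek=0; skip=0; i=0
tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
seek=$((seek + 1024))
tcp_dd --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=$skip
skip=$((skip + 1024))
sums[i]=$(md5sum file | cut -f1 -d' ')   # iteration 1 -> 2e88461f0dd535d2a57b506be0ec77b7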
00:27:53.142 [2024-11-18 23:23:11.951240] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92682 ] 00:27:53.142 [2024-11-18 23:23:12.092539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.142 [2024-11-18 23:23:12.120915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.079  [2024-11-18T23:23:14.392Z] Copying: 241/1024 [MB] (241 MBps) [2024-11-18T23:23:15.327Z] Copying: 486/1024 [MB] (245 MBps) [2024-11-18T23:23:16.704Z] Copying: 729/1024 [MB] (243 MBps) [2024-11-18T23:23:16.704Z] Copying: 973/1024 [MB] (244 MBps) [2024-11-18T23:23:16.704Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:27:57.326 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:57.586 Calculate MD5 checksum, iteration 2 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:57.586 23:23:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:57.586 [2024-11-18 23:23:16.757369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:27:57.586 [2024-11-18 23:23:16.757625] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92735 ] 00:27:57.586 [2024-11-18 23:23:16.900273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.586 [2024-11-18 23:23:16.929674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.576  [2024-11-18T23:23:19.518Z] Copying: 634/1024 [MB] (634 MBps) [2024-11-18T23:23:20.086Z] Copying: 1024/1024 [MB] (average 628 MBps) 00:28:00.708 00:28:00.708 23:23:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:00.708 23:23:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8f7ffe214f73b56723f8d6049c3325c8 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:03.253 [2024-11-18 23:23:22.190495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.253 [2024-11-18 23:23:22.190537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:03.253 [2024-11-18 23:23:22.190551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:03.253 [2024-11-18 23:23:22.190558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.253 [2024-11-18 23:23:22.190578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.253 [2024-11-18 23:23:22.190586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:03.253 [2024-11-18 23:23:22.190596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:03.253 [2024-11-18 23:23:22.190603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.253 [2024-11-18 23:23:22.190619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.253 [2024-11-18 23:23:22.190626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:03.253 [2024-11-18 23:23:22.190632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.253 [2024-11-18 23:23:22.190639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.253 [2024-11-18 23:23:22.190694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.188 ms, result 0 00:28:03.253 true 00:28:03.253 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:03.253 { 00:28:03.253 "name": "ftl", 00:28:03.253 "properties": [ 00:28:03.253 { 00:28:03.253 "name": "superblock_version", 00:28:03.253 "value": 5, 00:28:03.253 "read-only": true 00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "name": "base_device", 00:28:03.253 "bands": [ 00:28:03.253 { 00:28:03.253 "id": 0, 00:28:03.253 "state": "FREE", 00:28:03.253 "validity": 0.0 
00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "id": 1, 00:28:03.253 "state": "FREE", 00:28:03.253 "validity": 0.0 00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "id": 2, 00:28:03.253 "state": "FREE", 00:28:03.253 "validity": 0.0 00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "id": 3, 00:28:03.253 "state": "FREE", 00:28:03.253 "validity": 0.0 00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "id": 4, 00:28:03.253 "state": "FREE", 00:28:03.253 "validity": 0.0 00:28:03.253 }, 00:28:03.253 { 00:28:03.253 "id": 5, 00:28:03.253 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 6, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 7, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 8, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 9, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 10, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 11, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 12, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 13, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 14, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 15, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 16, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 17, 00:28:03.254 "state": "FREE", 00:28:03.254 "validity": 0.0 00:28:03.254 } 00:28:03.254 ], 00:28:03.254 "read-only": true 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "name": "cache_device", 00:28:03.254 "type": "bdev", 00:28:03.254 "chunks": [ 00:28:03.254 { 00:28:03.254 "id": 0, 00:28:03.254 "state": "INACTIVE", 00:28:03.254 "utilization": 0.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 1, 00:28:03.254 "state": "CLOSED", 00:28:03.254 "utilization": 1.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 2, 00:28:03.254 "state": "CLOSED", 00:28:03.254 "utilization": 1.0 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 3, 00:28:03.254 "state": "OPEN", 00:28:03.254 "utilization": 0.001953125 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "id": 4, 00:28:03.254 "state": "OPEN", 00:28:03.254 "utilization": 0.0 00:28:03.254 } 00:28:03.254 ], 00:28:03.254 "read-only": true 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "name": "verbose_mode", 00:28:03.254 "value": true, 00:28:03.254 "unit": "", 00:28:03.254 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:03.254 }, 00:28:03.254 { 00:28:03.254 "name": "prep_upgrade_on_shutdown", 00:28:03.254 "value": false, 00:28:03.254 "unit": "", 00:28:03.254 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:03.254 } 00:28:03.254 ] 00:28:03.254 } 00:28:03.254 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:03.254 [2024-11-18 23:23:22.554824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
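In the properties dump above, cache chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN with a small tail (0.001953125 = 1/512), so the chunk-count check that follows resolves to used=3. The same filter can be run standalone against a saved dump (props.json is a hypothetical filename):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl > props.json
# count cache chunks holding any data; matches the test's used=3
jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json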
00:28:03.254 [2024-11-18 23:23:22.554955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:03.254 [2024-11-18 23:23:22.555040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:03.254 [2024-11-18 23:23:22.555059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.254 [2024-11-18 23:23:22.555093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.254 [2024-11-18 23:23:22.555111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:03.254 [2024-11-18 23:23:22.555126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.254 [2024-11-18 23:23:22.555191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.254 [2024-11-18 23:23:22.555223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.254 [2024-11-18 23:23:22.555240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:03.254 [2024-11-18 23:23:22.555256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.254 [2024-11-18 23:23:22.555273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.254 [2024-11-18 23:23:22.555363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.528 ms, result 0 00:28:03.254 true 00:28:03.254 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:03.254 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:03.254 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:03.516 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:03.516 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:03.516 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:03.777 [2024-11-18 23:23:22.975178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.777 [2024-11-18 23:23:22.975212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:03.777 [2024-11-18 23:23:22.975220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:03.777 [2024-11-18 23:23:22.975227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.777 [2024-11-18 23:23:22.975244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.777 [2024-11-18 23:23:22.975251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:03.777 [2024-11-18 23:23:22.975258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.777 [2024-11-18 23:23:22.975264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.777 [2024-11-18 23:23:22.975279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.777 [2024-11-18 23:23:22.975285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:03.777 [2024-11-18 23:23:22.975291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:03.777 [2024-11-18 23:23:22.975297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:03.777 [2024-11-18 23:23:22.975340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.167 ms, result 0 00:28:03.777 true 00:28:03.777 23:23:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:04.039 { 00:28:04.039 "name": "ftl", 00:28:04.039 "properties": [ 00:28:04.039 { 00:28:04.039 "name": "superblock_version", 00:28:04.039 "value": 5, 00:28:04.039 "read-only": true 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "name": "base_device", 00:28:04.039 "bands": [ 00:28:04.039 { 00:28:04.039 "id": 0, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 1, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 2, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 3, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 4, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 5, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 6, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 7, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 8, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 9, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 10, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 11, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 12, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 13, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 14, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 15, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 16, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 17, 00:28:04.039 "state": "FREE", 00:28:04.039 "validity": 0.0 00:28:04.039 } 00:28:04.039 ], 00:28:04.039 "read-only": true 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "name": "cache_device", 00:28:04.039 "type": "bdev", 00:28:04.039 "chunks": [ 00:28:04.039 { 00:28:04.039 "id": 0, 00:28:04.039 "state": "INACTIVE", 00:28:04.039 "utilization": 0.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 1, 00:28:04.039 "state": "CLOSED", 00:28:04.039 "utilization": 1.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 2, 00:28:04.039 "state": "CLOSED", 00:28:04.039 "utilization": 1.0 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 3, 00:28:04.039 "state": "OPEN", 00:28:04.039 "utilization": 0.001953125 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "id": 4, 00:28:04.039 "state": "OPEN", 00:28:04.039 "utilization": 0.0 00:28:04.039 } 00:28:04.039 ], 00:28:04.039 "read-only": true 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "name": "verbose_mode", 
00:28:04.039 "value": true, 00:28:04.039 "unit": "", 00:28:04.039 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:04.039 }, 00:28:04.039 { 00:28:04.039 "name": "prep_upgrade_on_shutdown", 00:28:04.039 "value": true, 00:28:04.039 "unit": "", 00:28:04.039 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:04.039 } 00:28:04.039 ] 00:28:04.039 } 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 92422 ]] 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 92422 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 92422 ']' 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 92422 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92422 00:28:04.039 killing process with pid 92422 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.039 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.040 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92422' 00:28:04.040 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 92422 00:28:04.040 23:23:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 92422 00:28:04.040 [2024-11-18 23:23:23.337499] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:04.040 [2024-11-18 23:23:23.343483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.040 [2024-11-18 23:23:23.343515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:04.040 [2024-11-18 23:23:23.343526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.040 [2024-11-18 23:23:23.343533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.040 [2024-11-18 23:23:23.343552] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:04.040 [2024-11-18 23:23:23.344064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.040 [2024-11-18 23:23:23.344094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:04.040 [2024-11-18 23:23:23.344102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:28:04.040 [2024-11-18 23:23:23.344109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.045 [2024-11-18 23:23:31.819670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.045 [2024-11-18 23:23:31.819717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:14.045 [2024-11-18 23:23:31.819735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8475.539 ms 00:28:14.045 [2024-11-18 23:23:31.819742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.045 [2024-11-18 23:23:31.820870] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:14.045 [2024-11-18 23:23:31.820888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:14.045 [2024-11-18 23:23:31.820895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.115 ms 00:28:14.045 [2024-11-18 23:23:31.820902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.045 [2024-11-18 23:23:31.821782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.045 [2024-11-18 23:23:31.821794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:14.045 [2024-11-18 23:23:31.821802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms 00:28:14.045 [2024-11-18 23:23:31.821812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.045 [2024-11-18 23:23:31.823720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.045 [2024-11-18 23:23:31.823747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:14.045 [2024-11-18 23:23:31.823754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.872 ms 00:28:14.045 [2024-11-18 23:23:31.823761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.825852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.825879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:14.046 [2024-11-18 23:23:31.825887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.065 ms 00:28:14.046 [2024-11-18 23:23:31.825893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.825956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.825964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:14.046 [2024-11-18 23:23:31.825975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:14.046 [2024-11-18 23:23:31.825982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.827320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.827346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:14.046 [2024-11-18 23:23:31.827354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.326 ms 00:28:14.046 [2024-11-18 23:23:31.827359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.828317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.828342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:14.046 [2024-11-18 23:23:31.828349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.924 ms 00:28:14.046 [2024-11-18 23:23:31.828355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.829459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.829485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:14.046 [2024-11-18 23:23:31.829492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.080 ms 00:28:14.046 [2024-11-18 23:23:31.829497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.830550] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.830576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:14.046 [2024-11-18 23:23:31.830582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.005 ms 00:28:14.046 [2024-11-18 23:23:31.830588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.830611] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:14.046 [2024-11-18 23:23:31.830628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.046 [2024-11-18 23:23:31.830637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.046 [2024-11-18 23:23:31.830643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:14.046 [2024-11-18 23:23:31.830649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.046 [2024-11-18 23:23:31.830748] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:14.046 [2024-11-18 23:23:31.830755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c31e050b-0568-4228-9bf4-65df49cad241 00:28:14.046 [2024-11-18 23:23:31.830761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:14.046 [2024-11-18 23:23:31.830766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:14.046 [2024-11-18 23:23:31.830772] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:14.046 [2024-11-18 23:23:31.830779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:14.046 [2024-11-18 23:23:31.830785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:14.046 [2024-11-18 23:23:31.830963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:14.046 [2024-11-18 23:23:31.830970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:14.046 [2024-11-18 23:23:31.830975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:14.046 [2024-11-18 23:23:31.830980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:14.046 [2024-11-18 23:23:31.830986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.830992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:14.046 [2024-11-18 23:23:31.830999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:28:14.046 [2024-11-18 23:23:31.831005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.832715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.832740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:14.046 [2024-11-18 23:23:31.832748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.697 ms 00:28:14.046 [2024-11-18 23:23:31.832758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.832842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.046 [2024-11-18 23:23:31.832850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:14.046 [2024-11-18 23:23:31.832860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:14.046 [2024-11-18 23:23:31.832866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.838755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.838782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:14.046 [2024-11-18 23:23:31.838794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.838800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.838825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.838831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:14.046 [2024-11-18 23:23:31.838838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.838844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.838897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.838905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:14.046 [2024-11-18 23:23:31.838912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.838921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.838934] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.838940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:14.046 [2024-11-18 23:23:31.838947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.838953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.849251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.849284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:14.046 [2024-11-18 23:23:31.849298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.849305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.857707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.857742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:14.046 [2024-11-18 23:23:31.857752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.857758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.857822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.857831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:14.046 [2024-11-18 23:23:31.857838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.857845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.857874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.046 [2024-11-18 23:23:31.857883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:14.046 [2024-11-18 23:23:31.857889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.046 [2024-11-18 23:23:31.857896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.046 [2024-11-18 23:23:31.857960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.047 [2024-11-18 23:23:31.857968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:14.047 [2024-11-18 23:23:31.857978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.047 [2024-11-18 23:23:31.857984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.047 [2024-11-18 23:23:31.858011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.047 [2024-11-18 23:23:31.858020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:14.047 [2024-11-18 23:23:31.858027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.047 [2024-11-18 23:23:31.858033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.047 [2024-11-18 23:23:31.858072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.047 [2024-11-18 23:23:31.858080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:14.047 [2024-11-18 23:23:31.858086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.047 [2024-11-18 23:23:31.858093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.047 
[2024-11-18 23:23:31.858138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.047 [2024-11-18 23:23:31.858146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:14.047 [2024-11-18 23:23:31.858361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.047 [2024-11-18 23:23:31.858392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.047 [2024-11-18 23:23:31.858541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8515.027 ms, result 0 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=92927 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 92927 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 92927 ']' 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.960 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.961 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.961 23:23:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.961 [2024-11-18 23:23:35.191232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:15.961 [2024-11-18 23:23:35.191550] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92927 ] 00:28:16.221 [2024-11-18 23:23:35.337415] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.221 [2024-11-18 23:23:35.384410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.481 [2024-11-18 23:23:35.680523] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:16.481 [2024-11-18 23:23:35.680749] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:16.481 [2024-11-18 23:23:35.818742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.818860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:16.481 [2024-11-18 23:23:35.818913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:16.481 [2024-11-18 23:23:35.818935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.818999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.819018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:16.481 [2024-11-18 23:23:35.819033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:16.481 [2024-11-18 23:23:35.819048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.819294] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:16.481 [2024-11-18 23:23:35.819531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:16.481 [2024-11-18 23:23:35.819570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.819587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:16.481 [2024-11-18 23:23:35.819651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:28:16.481 [2024-11-18 23:23:35.819670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.820974] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:16.481 [2024-11-18 23:23:35.823493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.823525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:16.481 [2024-11-18 23:23:35.823534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.520 ms 00:28:16.481 [2024-11-18 23:23:35.823543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.823586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.823595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:16.481 [2024-11-18 23:23:35.823601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:16.481 [2024-11-18 23:23:35.823607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.829766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 
23:23:35.829871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:16.481 [2024-11-18 23:23:35.829885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.121 ms 00:28:16.481 [2024-11-18 23:23:35.829891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.829926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.829933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:16.481 [2024-11-18 23:23:35.829940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:16.481 [2024-11-18 23:23:35.829946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.829986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.829994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:16.481 [2024-11-18 23:23:35.830004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:16.481 [2024-11-18 23:23:35.830011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.830028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:16.481 [2024-11-18 23:23:35.831570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.831593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:16.481 [2024-11-18 23:23:35.831600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.546 ms 00:28:16.481 [2024-11-18 23:23:35.831606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.831630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.481 [2024-11-18 23:23:35.831638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:16.481 [2024-11-18 23:23:35.831644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:16.481 [2024-11-18 23:23:35.831652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.481 [2024-11-18 23:23:35.831674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:16.482 [2024-11-18 23:23:35.831691] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:16.482 [2024-11-18 23:23:35.831726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:16.482 [2024-11-18 23:23:35.831739] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:16.482 [2024-11-18 23:23:35.831820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:16.482 [2024-11-18 23:23:35.831828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:16.482 [2024-11-18 23:23:35.831839] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:16.482 [2024-11-18 23:23:35.831849] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:16.482 [2024-11-18 23:23:35.831856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:16.482 [2024-11-18 23:23:35.831863] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:16.482 [2024-11-18 23:23:35.831871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:16.482 [2024-11-18 23:23:35.831880] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:16.482 [2024-11-18 23:23:35.831885] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:16.482 [2024-11-18 23:23:35.831894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.482 [2024-11-18 23:23:35.831900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:16.482 [2024-11-18 23:23:35.831906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.223 ms 00:28:16.482 [2024-11-18 23:23:35.831911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.482 [2024-11-18 23:23:35.831978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.482 [2024-11-18 23:23:35.831985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:16.482 [2024-11-18 23:23:35.831991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:16.482 [2024-11-18 23:23:35.832002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.482 [2024-11-18 23:23:35.832084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:16.482 [2024-11-18 23:23:35.832094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:16.482 [2024-11-18 23:23:35.832100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:16.482 [2024-11-18 23:23:35.832118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:16.482 [2024-11-18 23:23:35.832129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:16.482 [2024-11-18 23:23:35.832135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:16.482 [2024-11-18 23:23:35.832140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:16.482 [2024-11-18 23:23:35.832151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:16.482 [2024-11-18 23:23:35.832170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:16.482 [2024-11-18 23:23:35.832187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:16.482 [2024-11-18 23:23:35.832192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:16.482 [2024-11-18 23:23:35.832208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:16.482 [2024-11-18 23:23:35.832214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832219] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:16.482 [2024-11-18 23:23:35.832226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:16.482 [2024-11-18 23:23:35.832244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:16.482 [2024-11-18 23:23:35.832261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:16.482 [2024-11-18 23:23:35.832280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:16.482 [2024-11-18 23:23:35.832298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:16.482 [2024-11-18 23:23:35.832317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:16.482 [2024-11-18 23:23:35.832335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:16.482 [2024-11-18 23:23:35.832352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:16.482 [2024-11-18 23:23:35.832358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832363] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:16.482 [2024-11-18 23:23:35.832373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:16.482 [2024-11-18 23:23:35.832379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.482 [2024-11-18 23:23:35.832395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:16.482 [2024-11-18 23:23:35.832401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:16.482 [2024-11-18 23:23:35.832412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:16.482 [2024-11-18 23:23:35.832419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:16.482 [2024-11-18 23:23:35.832425] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:16.482 [2024-11-18 23:23:35.832431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:16.482 [2024-11-18 23:23:35.832438] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:16.482 [2024-11-18 23:23:35.832446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:16.482 [2024-11-18 23:23:35.832460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:16.482 [2024-11-18 23:23:35.832478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:16.482 [2024-11-18 23:23:35.832484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:16.482 [2024-11-18 23:23:35.832490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:16.482 [2024-11-18 23:23:35.832496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:16.482 [2024-11-18 23:23:35.832541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:16.482 [2024-11-18 23:23:35.832548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:16.482 [2024-11-18 23:23:35.832560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:16.482 [2024-11-18 23:23:35.832565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:16.482 [2024-11-18 23:23:35.832570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:16.482 [2024-11-18 23:23:35.832576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.482 [2024-11-18 23:23:35.832581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:16.482 [2024-11-18 23:23:35.832588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.545 ms 00:28:16.483 [2024-11-18 23:23:35.832595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.483 [2024-11-18 23:23:35.832629] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:16.483 [2024-11-18 23:23:35.832636] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:19.776 [2024-11-18 23:23:38.623650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.623850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:19.776 [2024-11-18 23:23:38.623916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2791.018 ms 00:28:19.776 [2024-11-18 23:23:38.623941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.635567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.635713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.776 [2024-11-18 23:23:38.635771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.481 ms 00:28:19.776 [2024-11-18 23:23:38.635795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.635875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.635901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:19.776 [2024-11-18 23:23:38.635922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:19.776 [2024-11-18 23:23:38.635942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.655766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.655963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.776 [2024-11-18 23:23:38.656407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.715 ms 00:28:19.776 [2024-11-18 23:23:38.656471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.656669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.656721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.776 [2024-11-18 23:23:38.656753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:19.776 [2024-11-18 23:23:38.656839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.657452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.657517] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.776 [2024-11-18 23:23:38.657615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:28:19.776 [2024-11-18 23:23:38.657648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.657741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.657824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.776 [2024-11-18 23:23:38.657858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:28:19.776 [2024-11-18 23:23:38.657871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.666471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.666612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.776 [2024-11-18 23:23:38.666678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.567 ms 00:28:19.776 [2024-11-18 23:23:38.666738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.670031] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:19.776 [2024-11-18 23:23:38.670168] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:19.776 [2024-11-18 23:23:38.670230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.670253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:19.776 [2024-11-18 23:23:38.670273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.331 ms 00:28:19.776 [2024-11-18 23:23:38.670292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.674509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.674622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:19.776 [2024-11-18 23:23:38.674680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.172 ms 00:28:19.776 [2024-11-18 23:23:38.674702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.677066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.677216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:19.776 [2024-11-18 23:23:38.677273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.067 ms 00:28:19.776 [2024-11-18 23:23:38.677297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.679267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.679380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:19.776 [2024-11-18 23:23:38.679432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.923 ms 00:28:19.776 [2024-11-18 23:23:38.679454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.679911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.679965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:19.776 [2024-11-18 
23:23:38.680038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:28:19.776 [2024-11-18 23:23:38.680092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.700360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.700503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:19.776 [2024-11-18 23:23:38.700565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.216 ms 00:28:19.776 [2024-11-18 23:23:38.700587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.708671] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:19.776 [2024-11-18 23:23:38.709606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.709713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:19.776 [2024-11-18 23:23:38.709763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.971 ms 00:28:19.776 [2024-11-18 23:23:38.709792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.709861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.709889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:19.776 [2024-11-18 23:23:38.709911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:19.776 [2024-11-18 23:23:38.709930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.710010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.710103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:19.776 [2024-11-18 23:23:38.710124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:19.776 [2024-11-18 23:23:38.710143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.710204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.710655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:19.776 [2024-11-18 23:23:38.710707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:19.776 [2024-11-18 23:23:38.710789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.710865] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:19.776 [2024-11-18 23:23:38.710894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.710962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:19.776 [2024-11-18 23:23:38.710986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:19.776 [2024-11-18 23:23:38.711035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.776 [2024-11-18 23:23:38.715382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.776 [2024-11-18 23:23:38.715513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:19.776 [2024-11-18 23:23:38.715568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.303 ms 00:28:19.777 [2024-11-18 23:23:38.715609] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:19.777 [2024-11-18 23:23:38.715905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.777 [2024-11-18 23:23:38.715942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:19.777 [2024-11-18 23:23:38.715955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:19.777 [2024-11-18 23:23:38.715965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.777 [2024-11-18 23:23:38.717144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2897.958 ms, result 0 00:28:19.777 [2024-11-18 23:23:38.730322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.777 [2024-11-18 23:23:38.746288] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:19.777 [2024-11-18 23:23:38.754429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:20.346 [2024-11-18 23:23:39.643808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.346 [2024-11-18 23:23:39.643897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:20.346 [2024-11-18 23:23:39.643915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:20.346 [2024-11-18 23:23:39.643931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.346 [2024-11-18 23:23:39.643964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.346 [2024-11-18 23:23:39.643975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:20.346 [2024-11-18 23:23:39.643984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:20.346 [2024-11-18 23:23:39.643994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.346 [2024-11-18 23:23:39.644021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.346 [2024-11-18 23:23:39.644031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:20.346 [2024-11-18 23:23:39.644041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:20.346 [2024-11-18 23:23:39.644050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.346 [2024-11-18 23:23:39.644121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.318 ms, result 0 00:28:20.346 true 00:28:20.346 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:20.606 { 00:28:20.606 "name": "ftl", 00:28:20.606 "properties": [ 00:28:20.606 { 00:28:20.606 "name": "superblock_version", 00:28:20.606 "value": 5, 00:28:20.606 "read-only": true 00:28:20.607 }, 00:28:20.607 { 
00:28:20.607 "name": "base_device", 00:28:20.607 "bands": [ 00:28:20.607 { 00:28:20.607 "id": 0, 00:28:20.607 "state": "CLOSED", 00:28:20.607 "validity": 1.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 1, 00:28:20.607 "state": "CLOSED", 00:28:20.607 "validity": 1.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 2, 00:28:20.607 "state": "CLOSED", 00:28:20.607 "validity": 0.007843137254901933 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 3, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 4, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 5, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 6, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 7, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 8, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 9, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 10, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 11, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 12, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 13, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 14, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 15, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 16, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 17, 00:28:20.607 "state": "FREE", 00:28:20.607 "validity": 0.0 00:28:20.607 } 00:28:20.607 ], 00:28:20.607 "read-only": true 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "name": "cache_device", 00:28:20.607 "type": "bdev", 00:28:20.607 "chunks": [ 00:28:20.607 { 00:28:20.607 "id": 0, 00:28:20.607 "state": "INACTIVE", 00:28:20.607 "utilization": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 1, 00:28:20.607 "state": "OPEN", 00:28:20.607 "utilization": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 2, 00:28:20.607 "state": "OPEN", 00:28:20.607 "utilization": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 3, 00:28:20.607 "state": "FREE", 00:28:20.607 "utilization": 0.0 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "id": 4, 00:28:20.607 "state": "FREE", 00:28:20.607 "utilization": 0.0 00:28:20.607 } 00:28:20.607 ], 00:28:20.607 "read-only": true 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "name": "verbose_mode", 00:28:20.607 "value": true, 00:28:20.607 "unit": "", 00:28:20.607 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:20.607 }, 00:28:20.607 { 00:28:20.607 "name": "prep_upgrade_on_shutdown", 00:28:20.607 "value": false, 00:28:20.607 "unit": "", 00:28:20.607 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:20.607 } 00:28:20.607 ] 00:28:20.607 } 00:28:20.607 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | 
.chunks[] | select(.utilization != 0.0)] | length' 00:28:20.607 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:20.607 23:23:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:20.866 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:20.866 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:20.866 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:20.866 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:20.866 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:21.127 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:21.127 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:21.127 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:21.127 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:21.128 Validate MD5 checksum, iteration 1 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:21.128 23:23:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:21.128 [2024-11-18 23:23:40.412620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:21.128 [2024-11-18 23:23:40.412785] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92996 ] 00:28:21.389 [2024-11-18 23:23:40.566966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.389 [2024-11-18 23:23:40.618092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.778  [2024-11-18T23:23:43.100Z] Copying: 561/1024 [MB] (561 MBps) [2024-11-18T23:23:44.043Z] Copying: 1024/1024 [MB] (average 542 MBps) 00:28:24.665 00:28:24.665 23:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:24.665 23:23:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:26.580 Validate MD5 checksum, iteration 2 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2e88461f0dd535d2a57b506be0ec77b7 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2e88461f0dd535d2a57b506be0ec77b7 != \2\e\8\8\4\6\1\f\0\d\d\5\3\5\d\2\a\5\7\b\5\0\6\b\e\0\e\c\7\7\b\7 ]] 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:26.580 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:26.581 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:26.581 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:26.581 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:26.581 23:23:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:26.581 [2024-11-18 23:23:45.801990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:28:26.581 [2024-11-18 23:23:45.802123] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93057 ] 00:28:26.581 [2024-11-18 23:23:45.949942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.842 [2024-11-18 23:23:45.981724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.230  [2024-11-18T23:23:48.181Z] Copying: 659/1024 [MB] (659 MBps) [2024-11-18T23:23:51.486Z] Copying: 1024/1024 [MB] (average 603 MBps) 00:28:32.108 00:28:32.108 23:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:32.108 23:23:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8f7ffe214f73b56723f8d6049c3325c8 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8f7ffe214f73b56723f8d6049c3325c8 != \8\f\7\f\f\e\2\1\4\f\7\3\b\5\6\7\2\3\f\8\d\6\0\4\9\c\3\3\2\5\c\8 ]] 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 92927 ]] 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 92927 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93135 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93135 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93135 ']' 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.024 23:23:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.024 [2024-11-18 23:23:53.040955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:34.024 [2024-11-18 23:23:53.041222] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93135 ] 00:28:34.024 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 92927 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:34.024 [2024-11-18 23:23:53.184215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.024 [2024-11-18 23:23:53.226584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.286 [2024-11-18 23:23:53.519589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:34.286 [2024-11-18 23:23:53.519795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:34.286 [2024-11-18 23:23:53.657933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.286 [2024-11-18 23:23:53.658059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:34.286 [2024-11-18 23:23:53.658110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:34.286 [2024-11-18 23:23:53.658134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.286 [2024-11-18 23:23:53.658208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.286 [2024-11-18 23:23:53.658229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:34.286 [2024-11-18 23:23:53.658246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:34.286 [2024-11-18 23:23:53.658263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.286 [2024-11-18 23:23:53.658296] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:34.286 [2024-11-18 23:23:53.658542] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:34.286 [2024-11-18 23:23:53.658682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.286 [2024-11-18 23:23:53.658699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:34.286 [2024-11-18 23:23:53.658718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.393 ms 00:28:34.286 [2024-11-18 23:23:53.658733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.286 [2024-11-18 23:23:53.658975] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:34.550 [2024-11-18 23:23:53.663038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.663068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:34.550 [2024-11-18 23:23:53.663076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.066 ms 00:28:34.550 [2024-11-18 23:23:53.663086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.663992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:34.550 [2024-11-18 23:23:53.664019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:34.550 [2024-11-18 23:23:53.664028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:34.550 [2024-11-18 23:23:53.664034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.664254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.664263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:34.550 [2024-11-18 23:23:53.664275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:28:34.550 [2024-11-18 23:23:53.664281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.664312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.664319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:34.550 [2024-11-18 23:23:53.664328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:34.550 [2024-11-18 23:23:53.664333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.664353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.664360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:34.550 [2024-11-18 23:23:53.664366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:34.550 [2024-11-18 23:23:53.664376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.664393] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:34.550 [2024-11-18 23:23:53.665096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.665109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:34.550 [2024-11-18 23:23:53.665116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.709 ms 00:28:34.550 [2024-11-18 23:23:53.665122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.665141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.665149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:34.550 [2024-11-18 23:23:53.665167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:34.550 [2024-11-18 23:23:53.665180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.665203] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:34.550 [2024-11-18 23:23:53.665218] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:34.550 [2024-11-18 23:23:53.665249] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:34.550 [2024-11-18 23:23:53.665261] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:34.550 [2024-11-18 23:23:53.665342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:34.550 [2024-11-18 23:23:53.665352] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:34.550 [2024-11-18 23:23:53.665362] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:34.550 [2024-11-18 23:23:53.665374] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:34.550 [2024-11-18 23:23:53.665381] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:34.550 [2024-11-18 23:23:53.665388] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:34.550 [2024-11-18 23:23:53.665394] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:34.550 [2024-11-18 23:23:53.665399] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:34.550 [2024-11-18 23:23:53.665405] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:34.550 [2024-11-18 23:23:53.665412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.665418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:34.550 [2024-11-18 23:23:53.665424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:28:34.550 [2024-11-18 23:23:53.665430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.665496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.550 [2024-11-18 23:23:53.665502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:34.550 [2024-11-18 23:23:53.665508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:34.550 [2024-11-18 23:23:53.665513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.550 [2024-11-18 23:23:53.665590] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:34.550 [2024-11-18 23:23:53.665598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:34.550 [2024-11-18 23:23:53.665604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:34.550 [2024-11-18 23:23:53.665610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:34.550 [2024-11-18 23:23:53.665621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:34.550 [2024-11-18 23:23:53.665631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:34.550 [2024-11-18 23:23:53.665637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:34.550 [2024-11-18 23:23:53.665642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:34.550 [2024-11-18 23:23:53.665653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:34.550 [2024-11-18 23:23:53.665669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:34.550 [2024-11-18 23:23:53.665680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:34.550 [2024-11-18 23:23:53.665686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:34.550 [2024-11-18 23:23:53.665699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:34.550 [2024-11-18 23:23:53.665704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.550 [2024-11-18 23:23:53.665710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:34.550 [2024-11-18 23:23:53.665715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:34.550 [2024-11-18 23:23:53.665720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:34.550 [2024-11-18 23:23:53.665725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:34.550 [2024-11-18 23:23:53.665731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:34.550 [2024-11-18 23:23:53.665736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:34.550 [2024-11-18 23:23:53.665741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:34.550 [2024-11-18 23:23:53.665746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:34.550 [2024-11-18 23:23:53.665752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:34.551 [2024-11-18 23:23:53.665758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:34.551 [2024-11-18 23:23:53.665765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:34.551 [2024-11-18 23:23:53.665771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:34.551 [2024-11-18 23:23:53.665777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:34.551 [2024-11-18 23:23:53.665785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:34.551 [2024-11-18 23:23:53.665791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:34.551 [2024-11-18 23:23:53.665802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:34.551 [2024-11-18 23:23:53.665808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:34.551 [2024-11-18 23:23:53.665820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:34.551 [2024-11-18 23:23:53.665838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:34.551 [2024-11-18 23:23:53.665843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:34.551 [2024-11-18 23:23:53.665861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:34.551 [2024-11-18 23:23:53.665872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:34.551 [2024-11-18 23:23:53.665878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:34.551 [2024-11-18 23:23:53.665885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:34.551 [2024-11-18 23:23:53.665893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:34.551 [2024-11-18 23:23:53.665899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:34.551 [2024-11-18 23:23:53.665905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:34.551 [2024-11-18 23:23:53.665910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:34.551 [2024-11-18 23:23:53.665916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:34.551 [2024-11-18 23:23:53.665923] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:34.551 [2024-11-18 23:23:53.665931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.665938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:34.551 [2024-11-18 23:23:53.665944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.665951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.665957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:34.551 [2024-11-18 23:23:53.665964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:34.551 [2024-11-18 23:23:53.665970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:34.551 [2024-11-18 23:23:53.665977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:34.551 [2024-11-18 23:23:53.665983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.665990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.665998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:34.551 [2024-11-18 23:23:53.666029] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:34.551 [2024-11-18 23:23:53.666036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:34.551 [2024-11-18 23:23:53.666050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:34.551 [2024-11-18 23:23:53.666056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:34.551 [2024-11-18 23:23:53.666062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:34.551 [2024-11-18 23:23:53.666068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.666077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:34.551 [2024-11-18 23:23:53.666084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:28:34.551 [2024-11-18 23:23:53.666091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.674556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.674694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:34.551 [2024-11-18 23:23:53.674777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.414 ms 00:28:34.551 [2024-11-18 23:23:53.674800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.674844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.674860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:34.551 [2024-11-18 23:23:53.674878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:28:34.551 [2024-11-18 23:23:53.674893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.696621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.696796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:34.551 [2024-11-18 23:23:53.696944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.678 ms 00:28:34.551 [2024-11-18 23:23:53.697084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.697274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.697364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:34.551 [2024-11-18 23:23:53.697516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:34.551 [2024-11-18 23:23:53.697640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.697967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.698138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:34.551 [2024-11-18 23:23:53.698322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.118 ms 00:28:34.551 [2024-11-18 23:23:53.698399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.698642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.698801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:34.551 [2024-11-18 23:23:53.698926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:28:34.551 [2024-11-18 23:23:53.699140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.708805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.708889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:34.551 [2024-11-18 23:23:53.708929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.450 ms 00:28:34.551 [2024-11-18 23:23:53.708947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.709077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.709105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:34.551 [2024-11-18 23:23:53.709121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:34.551 [2024-11-18 23:23:53.709136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.713057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.713143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:34.551 [2024-11-18 23:23:53.713198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.887 ms 00:28:34.551 [2024-11-18 23:23:53.713218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.714317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.714391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:34.551 [2024-11-18 23:23:53.714433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.227 ms 00:28:34.551 [2024-11-18 23:23:53.714450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.729544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.551 [2024-11-18 23:23:53.729579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:34.551 [2024-11-18 23:23:53.729590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.052 ms 00:28:34.551 [2024-11-18 23:23:53.729597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.551 [2024-11-18 23:23:53.729719] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:34.551 [2024-11-18 23:23:53.729811] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:34.551 [2024-11-18 23:23:53.729898] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:34.551 [2024-11-18 23:23:53.729987] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:34.551 [2024-11-18 23:23:53.729994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-18 23:23:53.730001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:34.552 [2024-11-18 
23:23:53.730009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:28:34.552 [2024-11-18 23:23:53.730016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-18 23:23:53.730047] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:34.552 [2024-11-18 23:23:53.730060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-18 23:23:53.730067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:34.552 [2024-11-18 23:23:53.730079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:34.552 [2024-11-18 23:23:53.730085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-18 23:23:53.732315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-18 23:23:53.732347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:34.552 [2024-11-18 23:23:53.732357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.210 ms 00:28:34.552 [2024-11-18 23:23:53.732363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-18 23:23:53.732864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-18 23:23:53.732891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:34.552 [2024-11-18 23:23:53.732902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:34.552 [2024-11-18 23:23:53.732909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:34.552 [2024-11-18 23:23:53.732967] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:34.552 [2024-11-18 23:23:53.733127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:34.552 [2024-11-18 23:23:53.733136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:34.552 [2024-11-18 23:23:53.733143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.161 ms 00:28:34.552 [2024-11-18 23:23:53.733150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.126 [2024-11-18 23:23:54.376827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.126 [2024-11-18 23:23:54.376922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:35.126 [2024-11-18 23:23:54.376951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 643.386 ms 00:28:35.126 [2024-11-18 23:23:54.376965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.126 [2024-11-18 23:23:54.379432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.126 [2024-11-18 23:23:54.379682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:35.126 [2024-11-18 23:23:54.379714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.784 ms 00:28:35.126 [2024-11-18 23:23:54.379731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.126 [2024-11-18 23:23:54.380775] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:35.126 [2024-11-18 23:23:54.380830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.126 [2024-11-18 23:23:54.380847] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:35.126 [2024-11-18 23:23:54.380873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:28:35.126 [2024-11-18 23:23:54.380888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.126 [2024-11-18 23:23:54.380973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.126 [2024-11-18 23:23:54.380993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:35.126 [2024-11-18 23:23:54.381011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:35.126 [2024-11-18 23:23:54.381032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.126 [2024-11-18 23:23:54.381109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 648.114 ms, result 0 00:28:35.126 [2024-11-18 23:23:54.381216] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:35.126 [2024-11-18 23:23:54.381338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.126 [2024-11-18 23:23:54.381362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:35.126 [2024-11-18 23:23:54.381379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.124 ms 00:28:35.126 [2024-11-18 23:23:54.381392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.331637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.331707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:36.073 [2024-11-18 23:23:55.331722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 949.430 ms 00:28:36.073 [2024-11-18 23:23:55.331731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.333388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.333425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:36.073 [2024-11-18 23:23:55.333436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:28:36.073 [2024-11-18 23:23:55.333444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.333770] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:36.073 [2024-11-18 23:23:55.333793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.333802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:36.073 [2024-11-18 23:23:55.333811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.322 ms 00:28:36.073 [2024-11-18 23:23:55.333819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.333847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.333857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:36.073 [2024-11-18 23:23:55.333865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:36.073 [2024-11-18 23:23:55.333873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 
23:23:55.333909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 952.705 ms, result 0 00:28:36.073 [2024-11-18 23:23:55.333960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:36.073 [2024-11-18 23:23:55.333972] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:36.073 [2024-11-18 23:23:55.333982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.333990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:36.073 [2024-11-18 23:23:55.334003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1601.033 ms 00:28:36.073 [2024-11-18 23:23:55.334010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.334040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.334053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:36.073 [2024-11-18 23:23:55.334065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:36.073 [2024-11-18 23:23:55.334072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.342704] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:36.073 [2024-11-18 23:23:55.342924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.342939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:36.073 [2024-11-18 23:23:55.342948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.835 ms 00:28:36.073 [2024-11-18 23:23:55.342956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.343686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.343704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:36.073 [2024-11-18 23:23:55.343715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.658 ms 00:28:36.073 [2024-11-18 23:23:55.343723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.345972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:36.073 [2024-11-18 23:23:55.346100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.230 ms 00:28:36.073 [2024-11-18 23:23:55.346113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.346171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:36.073 [2024-11-18 23:23:55.346195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:36.073 [2024-11-18 23:23:55.346202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.346312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:36.073 
[2024-11-18 23:23:55.346333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:36.073 [2024-11-18 23:23:55.346341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.346365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:36.073 [2024-11-18 23:23:55.346381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:36.073 [2024-11-18 23:23:55.346388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.346423] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:36.073 [2024-11-18 23:23:55.346437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:36.073 [2024-11-18 23:23:55.346453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:36.073 [2024-11-18 23:23:55.346460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.346511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.073 [2024-11-18 23:23:55.346523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:36.073 [2024-11-18 23:23:55.346530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:36.073 [2024-11-18 23:23:55.346538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.073 [2024-11-18 23:23:55.347523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1689.118 ms, result 0 00:28:36.073 [2024-11-18 23:23:55.383316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.074 [2024-11-18 23:23:55.399297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:36.074 [2024-11-18 23:23:55.407456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:36.335 Validate MD5 checksum, iteration 1 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:36.335 23:23:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:36.335 23:23:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:36.335 [2024-11-18 23:23:55.619117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:36.335 [2024-11-18 23:23:55.619284] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93171 ] 00:28:36.596 [2024-11-18 23:23:55.770875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.596 [2024-11-18 23:23:55.822136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.981  [2024-11-18T23:23:58.321Z] Copying: 592/1024 [MB] (592 MBps) [2024-11-18T23:23:58.922Z] Copying: 1024/1024 [MB] (average 547 MBps) 00:28:39.544 00:28:39.544 23:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:39.544 23:23:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:42.092 Validate MD5 checksum, iteration 2 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2e88461f0dd535d2a57b506be0ec77b7 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2e88461f0dd535d2a57b506be0ec77b7 != \2\e\8\8\4\6\1\f\0\d\d\5\3\5\d\2\a\5\7\b\5\0\6\b\e\0\e\c\7\7\b\7 ]] 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:42.092 23:24:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:42.092 [2024-11-18 23:24:00.929751] Starting SPDK v24.09.1-pre git 
sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:42.092 [2024-11-18 23:24:00.930034] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93227 ] 00:28:42.092 [2024-11-18 23:24:01.079711] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.092 [2024-11-18 23:24:01.112669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.474  [2024-11-18T23:24:03.418Z] Copying: 590/1024 [MB] (590 MBps) [2024-11-18T23:24:03.676Z] Copying: 1024/1024 [MB] (average 585 MBps) 00:28:44.298 00:28:44.298 23:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:44.299 23:24:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8f7ffe214f73b56723f8d6049c3325c8 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8f7ffe214f73b56723f8d6049c3325c8 != \8\f\7\f\f\e\2\1\4\f\7\3\b\5\6\7\2\3\f\8\d\6\0\4\9\c\3\3\2\5\c\8 ]] 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93135 ]] 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93135 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93135 ']' 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93135 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93135 00:28:46.846 killing process with pid 93135 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93135' 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 93135 00:28:46.846 23:24:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93135 00:28:46.846 [2024-11-18 23:24:06.028955] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:46.847 [2024-11-18 23:24:06.035500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.035616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:46.847 [2024-11-18 23:24:06.035671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:46.847 [2024-11-18 23:24:06.035691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.035724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:46.847 [2024-11-18 23:24:06.036254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.036276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:46.847 [2024-11-18 23:24:06.036285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.516 ms 00:28:46.847 [2024-11-18 23:24:06.036291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.036483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.036491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:46.847 [2024-11-18 23:24:06.036498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.171 ms 00:28:46.847 [2024-11-18 23:24:06.036505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.037584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.037607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:46.847 [2024-11-18 23:24:06.037614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:28:46.847 [2024-11-18 23:24:06.037620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.038515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.038597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:46.847 [2024-11-18 23:24:06.038608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.870 ms 00:28:46.847 [2024-11-18 23:24:06.038629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.040115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.040145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:46.847 [2024-11-18 23:24:06.040162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.456 ms 00:28:46.847 [2024-11-18 23:24:06.040169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.041520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.041551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:46.847 [2024-11-18 23:24:06.041559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.325 ms 00:28:46.847 [2024-11-18 23:24:06.041565] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.041615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.041622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:46.847 [2024-11-18 23:24:06.041628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:28:46.847 [2024-11-18 23:24:06.041635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.043082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.043107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:46.847 [2024-11-18 23:24:06.043114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:28:46.847 [2024-11-18 23:24:06.043119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.044458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.044483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:46.847 [2024-11-18 23:24:06.044490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.315 ms 00:28:46.847 [2024-11-18 23:24:06.044496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.045554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.045579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:46.847 [2024-11-18 23:24:06.045586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.034 ms 00:28:46.847 [2024-11-18 23:24:06.045591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.046516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.046540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:46.847 [2024-11-18 23:24:06.046548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.877 ms 00:28:46.847 [2024-11-18 23:24:06.046554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.046577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:46.847 [2024-11-18 23:24:06.046589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:46.847 [2024-11-18 23:24:06.046600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:46.847 [2024-11-18 23:24:06.046607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:46.847 [2024-11-18 23:24:06.046622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 
[2024-11-18 23:24:06.046653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:46.847 [2024-11-18 23:24:06.046716] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:46.847 [2024-11-18 23:24:06.046722] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: c31e050b-0568-4228-9bf4-65df49cad241 00:28:46.847 [2024-11-18 23:24:06.046728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:46.847 [2024-11-18 23:24:06.046734] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:46.847 [2024-11-18 23:24:06.046740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:46.847 [2024-11-18 23:24:06.046746] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:46.847 [2024-11-18 23:24:06.046752] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:46.847 [2024-11-18 23:24:06.046758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:46.847 [2024-11-18 23:24:06.046763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:46.847 [2024-11-18 23:24:06.046769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:46.847 [2024-11-18 23:24:06.046774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:46.847 [2024-11-18 23:24:06.046780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.046786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:46.847 [2024-11-18 23:24:06.046792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:28:46.847 [2024-11-18 23:24:06.046802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.048421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.048513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:46.847 [2024-11-18 23:24:06.048525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.606 ms 00:28:46.847 [2024-11-18 23:24:06.048531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:28:46.847 [2024-11-18 23:24:06.048618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.847 [2024-11-18 23:24:06.048626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:46.847 [2024-11-18 23:24:06.048637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:46.847 [2024-11-18 23:24:06.048643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.054552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.847 [2024-11-18 23:24:06.054579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:46.847 [2024-11-18 23:24:06.054588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.847 [2024-11-18 23:24:06.054595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.054626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.847 [2024-11-18 23:24:06.054633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:46.847 [2024-11-18 23:24:06.054643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.847 [2024-11-18 23:24:06.054649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.054713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.847 [2024-11-18 23:24:06.054726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:46.847 [2024-11-18 23:24:06.054733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.847 [2024-11-18 23:24:06.054739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.847 [2024-11-18 23:24:06.054756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.847 [2024-11-18 23:24:06.054763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:46.847 [2024-11-18 23:24:06.054769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.847 [2024-11-18 23:24:06.054777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.065331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.065372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:46.848 [2024-11-18 23:24:06.065385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.065392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.073675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.073708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:46.848 [2024-11-18 23:24:06.073722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.073729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.073796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.073804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:46.848 [2024-11-18 23:24:06.073811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.073817] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.073845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.073853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:46.848 [2024-11-18 23:24:06.073863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.073870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.073931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.073939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:46.848 [2024-11-18 23:24:06.073946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.073952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.073978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.073986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:46.848 [2024-11-18 23:24:06.073995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.074002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.074039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.074046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:46.848 [2024-11-18 23:24:06.074053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.074059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.074099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:46.848 [2024-11-18 23:24:06.074106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:46.848 [2024-11-18 23:24:06.074113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:46.848 [2024-11-18 23:24:06.074119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.848 [2024-11-18 23:24:06.074253] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 38.724 ms, result 0 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:47.110 Remove shared memory files 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:47.110 23:24:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid92927 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:47.110 00:28:47.110 real 1m13.981s 00:28:47.110 user 1m37.748s 00:28:47.110 sys 0m20.551s 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:47.110 23:24:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:47.110 ************************************ 00:28:47.110 END TEST ftl_upgrade_shutdown 00:28:47.110 ************************************ 00:28:47.110 23:24:06 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:47.110 23:24:06 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:47.110 23:24:06 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:47.110 23:24:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:47.110 23:24:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:47.110 ************************************ 00:28:47.110 START TEST ftl_restore_fast 00:28:47.110 ************************************ 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:47.110 * Looking for test storage... 00:28:47.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:28:47.110 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.111 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.372 --rc genhtml_branch_coverage=1 00:28:47.372 --rc genhtml_function_coverage=1 00:28:47.372 --rc genhtml_legend=1 00:28:47.372 --rc geninfo_all_blocks=1 00:28:47.372 --rc geninfo_unexecuted_blocks=1 00:28:47.372 00:28:47.372 ' 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.372 --rc genhtml_branch_coverage=1 00:28:47.372 --rc genhtml_function_coverage=1 00:28:47.372 --rc genhtml_legend=1 00:28:47.372 --rc geninfo_all_blocks=1 00:28:47.372 --rc geninfo_unexecuted_blocks=1 00:28:47.372 00:28:47.372 ' 00:28:47.372 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:47.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.372 --rc genhtml_branch_coverage=1 00:28:47.372 --rc genhtml_function_coverage=1 00:28:47.372 --rc genhtml_legend=1 00:28:47.372 --rc geninfo_all_blocks=1 00:28:47.373 --rc geninfo_unexecuted_blocks=1 00:28:47.373 00:28:47.373 ' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:47.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.373 --rc genhtml_branch_coverage=1 00:28:47.373 --rc genhtml_function_coverage=1 00:28:47.373 --rc genhtml_legend=1 00:28:47.373 --rc geninfo_all_blocks=1 00:28:47.373 --rc geninfo_unexecuted_blocks=1 00:28:47.373 00:28:47.373 ' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.jba6pWsB3f 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:47.373 23:24:06 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=93362 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 93362 00:28:47.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 93362 ']' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:47.373 23:24:06 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:47.373 [2024-11-18 23:24:06.602995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:28:47.373 [2024-11-18 23:24:06.603140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93362 ] 00:28:47.634 [2024-11-18 23:24:06.755672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.634 [2024-11-18 23:24:06.828550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:48.205 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:48.467 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:48.728 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:48.728 { 00:28:48.728 "name": "nvme0n1", 00:28:48.728 "aliases": [ 00:28:48.728 "f111fb4a-5559-489d-a97b-91adca071b47" 00:28:48.728 ], 00:28:48.728 "product_name": "NVMe disk", 00:28:48.728 "block_size": 4096, 00:28:48.728 "num_blocks": 1310720, 00:28:48.728 "uuid": "f111fb4a-5559-489d-a97b-91adca071b47", 00:28:48.728 "numa_id": -1, 00:28:48.728 "assigned_rate_limits": { 00:28:48.728 "rw_ios_per_sec": 0, 00:28:48.728 "rw_mbytes_per_sec": 0, 00:28:48.728 "r_mbytes_per_sec": 0, 00:28:48.728 "w_mbytes_per_sec": 0 00:28:48.728 }, 00:28:48.728 "claimed": true, 00:28:48.728 "claim_type": "read_many_write_one", 00:28:48.728 "zoned": false, 00:28:48.728 "supported_io_types": { 00:28:48.728 "read": true, 00:28:48.728 "write": true, 00:28:48.728 "unmap": true, 00:28:48.728 "flush": true, 00:28:48.728 "reset": true, 00:28:48.728 "nvme_admin": true, 00:28:48.728 "nvme_io": true, 00:28:48.728 "nvme_io_md": false, 00:28:48.728 "write_zeroes": true, 00:28:48.728 "zcopy": false, 00:28:48.728 "get_zone_info": false, 00:28:48.728 "zone_management": false, 00:28:48.728 "zone_append": false, 00:28:48.728 "compare": true, 00:28:48.728 "compare_and_write": false, 00:28:48.728 "abort": true, 00:28:48.728 "seek_hole": false, 00:28:48.728 "seek_data": false, 00:28:48.728 "copy": true, 00:28:48.728 "nvme_iov_md": false 00:28:48.728 }, 00:28:48.728 "driver_specific": { 00:28:48.728 "nvme": [ 00:28:48.728 { 00:28:48.728 "pci_address": "0000:00:11.0", 00:28:48.728 "trid": { 00:28:48.728 "trtype": "PCIe", 00:28:48.728 "traddr": "0000:00:11.0" 00:28:48.728 }, 00:28:48.728 "ctrlr_data": { 00:28:48.728 "cntlid": 0, 00:28:48.728 "vendor_id": "0x1b36", 00:28:48.728 "model_number": "QEMU NVMe Ctrl", 00:28:48.728 "serial_number": "12341", 00:28:48.728 "firmware_revision": "8.0.0", 00:28:48.728 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:48.728 "oacs": { 00:28:48.728 "security": 0, 00:28:48.728 "format": 1, 00:28:48.728 "firmware": 0, 00:28:48.728 "ns_manage": 1 00:28:48.728 }, 00:28:48.728 "multi_ctrlr": false, 00:28:48.728 "ana_reporting": false 00:28:48.728 }, 00:28:48.728 "vs": { 00:28:48.728 "nvme_version": "1.4" 00:28:48.728 }, 00:28:48.728 "ns_data": { 00:28:48.728 "id": 1, 00:28:48.728 "can_share": false 00:28:48.728 } 00:28:48.728 } 00:28:48.728 ], 00:28:48.728 "mp_policy": "active_passive" 00:28:48.728 } 00:28:48.728 } 00:28:48.728 ]' 00:28:48.728 23:24:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:48.728 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:48.989 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=bf06058c-7c3c-46da-a72b-5aeec6826756 00:28:48.989 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:48.989 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf06058c-7c3c-46da-a72b-5aeec6826756 00:28:49.251 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:49.512 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=8d294bbd-be84-4cc8-975e-5ff15e1e36b3 00:28:49.512 23:24:08 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8d294bbd-be84-4cc8-975e-5ff15e1e36b3 00:28:49.772 23:24:09 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:49.773 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:50.046 { 00:28:50.046 "name": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:50.046 "aliases": [ 00:28:50.046 "lvs/nvme0n1p0" 00:28:50.046 ], 00:28:50.046 "product_name": "Logical Volume", 00:28:50.046 "block_size": 4096, 00:28:50.046 "num_blocks": 26476544, 00:28:50.046 "uuid": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:50.046 "assigned_rate_limits": { 00:28:50.046 "rw_ios_per_sec": 0, 00:28:50.046 "rw_mbytes_per_sec": 0, 00:28:50.046 "r_mbytes_per_sec": 0, 00:28:50.046 "w_mbytes_per_sec": 0 00:28:50.046 }, 00:28:50.046 "claimed": false, 00:28:50.046 "zoned": false, 00:28:50.046 "supported_io_types": { 00:28:50.046 "read": true, 00:28:50.046 "write": true, 00:28:50.046 "unmap": true, 00:28:50.046 "flush": false, 00:28:50.046 "reset": true, 00:28:50.046 "nvme_admin": false, 00:28:50.046 "nvme_io": false, 00:28:50.046 "nvme_io_md": false, 00:28:50.046 "write_zeroes": true, 00:28:50.046 "zcopy": false, 00:28:50.046 "get_zone_info": false, 00:28:50.046 "zone_management": false, 00:28:50.046 
"zone_append": false, 00:28:50.046 "compare": false, 00:28:50.046 "compare_and_write": false, 00:28:50.046 "abort": false, 00:28:50.046 "seek_hole": true, 00:28:50.046 "seek_data": true, 00:28:50.046 "copy": false, 00:28:50.046 "nvme_iov_md": false 00:28:50.046 }, 00:28:50.046 "driver_specific": { 00:28:50.046 "lvol": { 00:28:50.046 "lvol_store_uuid": "8d294bbd-be84-4cc8-975e-5ff15e1e36b3", 00:28:50.046 "base_bdev": "nvme0n1", 00:28:50.046 "thin_provision": true, 00:28:50.046 "num_allocated_clusters": 0, 00:28:50.046 "snapshot": false, 00:28:50.046 "clone": false, 00:28:50.046 "esnap_clone": false 00:28:50.046 } 00:28:50.046 } 00:28:50.046 } 00:28:50.046 ]' 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:50.046 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:50.307 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:50.569 { 00:28:50.569 "name": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:50.569 "aliases": [ 00:28:50.569 "lvs/nvme0n1p0" 00:28:50.569 ], 00:28:50.569 "product_name": "Logical Volume", 00:28:50.569 "block_size": 4096, 00:28:50.569 "num_blocks": 26476544, 00:28:50.569 "uuid": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:50.569 "assigned_rate_limits": { 00:28:50.569 "rw_ios_per_sec": 0, 00:28:50.569 "rw_mbytes_per_sec": 0, 00:28:50.569 "r_mbytes_per_sec": 0, 00:28:50.569 "w_mbytes_per_sec": 0 00:28:50.569 }, 00:28:50.569 "claimed": false, 00:28:50.569 "zoned": false, 00:28:50.569 "supported_io_types": { 00:28:50.569 "read": true, 00:28:50.569 "write": true, 00:28:50.569 "unmap": true, 00:28:50.569 "flush": false, 00:28:50.569 "reset": true, 00:28:50.569 "nvme_admin": false, 00:28:50.569 "nvme_io": false, 00:28:50.569 "nvme_io_md": false, 00:28:50.569 "write_zeroes": true, 00:28:50.569 "zcopy": false, 00:28:50.569 "get_zone_info": false, 00:28:50.569 
"zone_management": false, 00:28:50.569 "zone_append": false, 00:28:50.569 "compare": false, 00:28:50.569 "compare_and_write": false, 00:28:50.569 "abort": false, 00:28:50.569 "seek_hole": true, 00:28:50.569 "seek_data": true, 00:28:50.569 "copy": false, 00:28:50.569 "nvme_iov_md": false 00:28:50.569 }, 00:28:50.569 "driver_specific": { 00:28:50.569 "lvol": { 00:28:50.569 "lvol_store_uuid": "8d294bbd-be84-4cc8-975e-5ff15e1e36b3", 00:28:50.569 "base_bdev": "nvme0n1", 00:28:50.569 "thin_provision": true, 00:28:50.569 "num_allocated_clusters": 0, 00:28:50.569 "snapshot": false, 00:28:50.569 "clone": false, 00:28:50.569 "esnap_clone": false 00:28:50.569 } 00:28:50.569 } 00:28:50.569 } 00:28:50.569 ]' 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:50.569 23:24:09 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:50.831 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:51.093 { 00:28:51.093 "name": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:51.093 "aliases": [ 00:28:51.093 "lvs/nvme0n1p0" 00:28:51.093 ], 00:28:51.093 "product_name": "Logical Volume", 00:28:51.093 "block_size": 4096, 00:28:51.093 "num_blocks": 26476544, 00:28:51.093 "uuid": "17322793-84c6-4ea6-8d2f-4ff3e95a01c8", 00:28:51.093 "assigned_rate_limits": { 00:28:51.093 "rw_ios_per_sec": 0, 00:28:51.093 "rw_mbytes_per_sec": 0, 00:28:51.093 "r_mbytes_per_sec": 0, 00:28:51.093 "w_mbytes_per_sec": 0 00:28:51.093 }, 00:28:51.093 "claimed": false, 00:28:51.093 "zoned": false, 00:28:51.093 "supported_io_types": { 00:28:51.093 "read": true, 00:28:51.093 "write": true, 00:28:51.093 "unmap": true, 00:28:51.093 "flush": false, 00:28:51.093 "reset": true, 00:28:51.093 "nvme_admin": false, 00:28:51.093 "nvme_io": false, 00:28:51.093 "nvme_io_md": false, 00:28:51.093 "write_zeroes": true, 00:28:51.093 "zcopy": false, 00:28:51.093 "get_zone_info": false, 00:28:51.093 "zone_management": false, 00:28:51.093 "zone_append": false, 00:28:51.093 "compare": false, 00:28:51.093 "compare_and_write": false, 00:28:51.093 "abort": false, 
00:28:51.093 "seek_hole": true, 00:28:51.093 "seek_data": true, 00:28:51.093 "copy": false, 00:28:51.093 "nvme_iov_md": false 00:28:51.093 }, 00:28:51.093 "driver_specific": { 00:28:51.093 "lvol": { 00:28:51.093 "lvol_store_uuid": "8d294bbd-be84-4cc8-975e-5ff15e1e36b3", 00:28:51.093 "base_bdev": "nvme0n1", 00:28:51.093 "thin_provision": true, 00:28:51.093 "num_allocated_clusters": 0, 00:28:51.093 "snapshot": false, 00:28:51.093 "clone": false, 00:28:51.093 "esnap_clone": false 00:28:51.093 } 00:28:51.093 } 00:28:51.093 } 00:28:51.093 ]' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 --l2p_dram_limit 10' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:51.093 23:24:10 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 17322793-84c6-4ea6-8d2f-4ff3e95a01c8 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:51.354 [2024-11-18 23:24:10.496406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.496456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:51.354 [2024-11-18 23:24:10.496469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:51.354 [2024-11-18 23:24:10.496478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.496539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.496549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:51.354 [2024-11-18 23:24:10.496556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:51.354 [2024-11-18 23:24:10.496567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.496588] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:51.354 [2024-11-18 23:24:10.497618] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:51.354 [2024-11-18 23:24:10.497648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.497658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:51.354 [2024-11-18 23:24:10.497671] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:28:51.354 [2024-11-18 23:24:10.497680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.497746] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 010f5a10-8b64-4503-bab5-5638e96e85f0 00:28:51.354 [2024-11-18 23:24:10.499068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.499097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:51.354 [2024-11-18 23:24:10.499108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:51.354 [2024-11-18 23:24:10.499114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.506072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.506101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:51.354 [2024-11-18 23:24:10.506110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:28:51.354 [2024-11-18 23:24:10.506117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.506198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.506206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:51.354 [2024-11-18 23:24:10.506214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:51.354 [2024-11-18 23:24:10.506222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.506269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.506280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:51.354 [2024-11-18 23:24:10.506288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:51.354 [2024-11-18 23:24:10.506295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.506316] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:51.354 [2024-11-18 23:24:10.508005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.508035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:51.354 [2024-11-18 23:24:10.508044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.697 ms 00:28:51.354 [2024-11-18 23:24:10.508056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.508087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.508096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:51.354 [2024-11-18 23:24:10.508102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:51.354 [2024-11-18 23:24:10.508112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.508125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:51.354 [2024-11-18 23:24:10.508261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:51.354 [2024-11-18 23:24:10.508317] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:51.354 [2024-11-18 23:24:10.508329] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:51.354 [2024-11-18 23:24:10.508337] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:51.354 [2024-11-18 23:24:10.508346] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:51.354 [2024-11-18 23:24:10.508353] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:51.354 [2024-11-18 23:24:10.508365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:51.354 [2024-11-18 23:24:10.508372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:51.354 [2024-11-18 23:24:10.508379] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:51.354 [2024-11-18 23:24:10.508387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.508394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:51.354 [2024-11-18 23:24:10.508400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:51.354 [2024-11-18 23:24:10.508408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.508474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.354 [2024-11-18 23:24:10.508484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:51.354 [2024-11-18 23:24:10.508490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:51.354 [2024-11-18 23:24:10.508498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.354 [2024-11-18 23:24:10.508575] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:51.354 [2024-11-18 23:24:10.508586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:51.354 [2024-11-18 23:24:10.508595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:51.354 [2024-11-18 23:24:10.508602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.354 [2024-11-18 23:24:10.508608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:51.354 [2024-11-18 23:24:10.508615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:51.354 [2024-11-18 23:24:10.508620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:51.354 [2024-11-18 23:24:10.508627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:51.354 [2024-11-18 23:24:10.508633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:51.354 [2024-11-18 23:24:10.508639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:51.354 [2024-11-18 23:24:10.508644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:51.354 [2024-11-18 23:24:10.508651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:51.354 [2024-11-18 23:24:10.508656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:51.354 [2024-11-18 23:24:10.508664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:51.354 [2024-11-18 23:24:10.508670] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:51.354 [2024-11-18 23:24:10.508676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.354 [2024-11-18 23:24:10.508681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:51.354 [2024-11-18 23:24:10.508688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:51.354 [2024-11-18 23:24:10.508692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.354 [2024-11-18 23:24:10.508699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:51.355 [2024-11-18 23:24:10.508704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.355 [2024-11-18 23:24:10.508716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:51.355 [2024-11-18 23:24:10.508723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.355 [2024-11-18 23:24:10.508736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:51.355 [2024-11-18 23:24:10.508741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.355 [2024-11-18 23:24:10.508753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:51.355 [2024-11-18 23:24:10.508761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:51.355 [2024-11-18 23:24:10.508773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:51.355 [2024-11-18 23:24:10.508778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:51.355 [2024-11-18 23:24:10.508790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:51.355 [2024-11-18 23:24:10.508797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:51.355 [2024-11-18 23:24:10.508802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:51.355 [2024-11-18 23:24:10.508809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:51.355 [2024-11-18 23:24:10.508814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:51.355 [2024-11-18 23:24:10.508820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:51.355 [2024-11-18 23:24:10.508832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:51.355 [2024-11-18 23:24:10.508836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508843] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:51.355 [2024-11-18 23:24:10.508849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:51.355 [2024-11-18 23:24:10.508858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:51.355 [2024-11-18 23:24:10.508864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:51.355 [2024-11-18 23:24:10.508874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:51.355 [2024-11-18 23:24:10.508878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:51.355 [2024-11-18 23:24:10.508885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:51.355 [2024-11-18 23:24:10.508890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:51.355 [2024-11-18 23:24:10.508896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:51.355 [2024-11-18 23:24:10.508901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:51.355 [2024-11-18 23:24:10.508911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:51.355 [2024-11-18 23:24:10.508918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.508926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:51.355 [2024-11-18 23:24:10.508932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:51.355 [2024-11-18 23:24:10.508941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:51.355 [2024-11-18 23:24:10.508947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:51.355 [2024-11-18 23:24:10.508954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:51.355 [2024-11-18 23:24:10.508959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:51.355 [2024-11-18 23:24:10.508968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:51.355 [2024-11-18 23:24:10.508973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:51.355 [2024-11-18 23:24:10.508980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:51.355 [2024-11-18 23:24:10.508985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.508992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.508998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.509005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.509010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:51.355 [2024-11-18 23:24:10.509017] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:51.355 [2024-11-18 23:24:10.509025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.509033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:51.355 [2024-11-18 23:24:10.509038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:51.355 [2024-11-18 23:24:10.509045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:51.355 [2024-11-18 23:24:10.509050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:51.355 [2024-11-18 23:24:10.509058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.355 [2024-11-18 23:24:10.509063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:51.355 [2024-11-18 23:24:10.509072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:28:51.355 [2024-11-18 23:24:10.509077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.355 [2024-11-18 23:24:10.509109] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:51.355 [2024-11-18 23:24:10.509117] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:54.657 [2024-11-18 23:24:13.778286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.778373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:54.657 [2024-11-18 23:24:13.778399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3269.167 ms 00:28:54.657 [2024-11-18 23:24:13.778409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.794944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.795004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:54.657 [2024-11-18 23:24:13.795025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.428 ms 00:28:54.657 [2024-11-18 23:24:13.795035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.795140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.795151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:54.657 [2024-11-18 23:24:13.795183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:54.657 [2024-11-18 23:24:13.795192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.809716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.809769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:54.657 [2024-11-18 23:24:13.809785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.466 ms 00:28:54.657 [2024-11-18 23:24:13.809795] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.809844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.809857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:54.657 [2024-11-18 23:24:13.809870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:54.657 [2024-11-18 23:24:13.809882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.810633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.810673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:54.657 [2024-11-18 23:24:13.810689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:28:54.657 [2024-11-18 23:24:13.810699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.810828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.810837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:54.657 [2024-11-18 23:24:13.810854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:54.657 [2024-11-18 23:24:13.810863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.838481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.838541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:54.657 [2024-11-18 23:24:13.838557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.583 ms 00:28:54.657 [2024-11-18 23:24:13.838566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.850236] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:54.657 [2024-11-18 23:24:13.855257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.855309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:54.657 [2024-11-18 23:24:13.855323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.532 ms 00:28:54.657 [2024-11-18 23:24:13.855335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.947402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.947472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:54.657 [2024-11-18 23:24:13.947486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.032 ms 00:28:54.657 [2024-11-18 23:24:13.947502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.947733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.657 [2024-11-18 23:24:13.947749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:54.657 [2024-11-18 23:24:13.947758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:28:54.657 [2024-11-18 23:24:13.947770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.657 [2024-11-18 23:24:13.953398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:13.953457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:54.658 [2024-11-18 23:24:13.953469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms 00:28:54.658 [2024-11-18 23:24:13.953481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:13.958224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:13.958279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:54.658 [2024-11-18 23:24:13.958291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.691 ms 00:28:54.658 [2024-11-18 23:24:13.958302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:13.958783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:13.958812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:54.658 [2024-11-18 23:24:13.958823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:28:54.658 [2024-11-18 23:24:13.958838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.007869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.007940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:54.658 [2024-11-18 23:24:14.007955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.990 ms 00:28:54.658 [2024-11-18 23:24:14.007968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.015373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.015436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:54.658 [2024-11-18 23:24:14.015448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.335 ms 00:28:54.658 [2024-11-18 23:24:14.015460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.020736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.020795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:54.658 [2024-11-18 23:24:14.020805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:28:54.658 [2024-11-18 23:24:14.020817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.026313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.026375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:54.658 [2024-11-18 23:24:14.026386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.447 ms 00:28:54.658 [2024-11-18 23:24:14.026401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.026486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.026500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:54.658 [2024-11-18 23:24:14.026511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:54.658 [2024-11-18 23:24:14.026523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.026642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:54.658 [2024-11-18 23:24:14.026657] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:54.658 [2024-11-18 23:24:14.026666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:28:54.658 [2024-11-18 23:24:14.026686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:54.658 [2024-11-18 23:24:14.028108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3531.134 ms, result 0 00:28:54.919 { 00:28:54.919 "name": "ftl0", 00:28:54.919 "uuid": "010f5a10-8b64-4503-bab5-5638e96e85f0" 00:28:54.919 } 00:28:54.919 23:24:14 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:54.919 23:24:14 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:54.919 23:24:14 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:54.919 23:24:14 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:55.184 [2024-11-18 23:24:14.471267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.471370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:55.184 [2024-11-18 23:24:14.471388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:55.184 [2024-11-18 23:24:14.471398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.471433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:55.184 [2024-11-18 23:24:14.472449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.472501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:55.184 [2024-11-18 23:24:14.472514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:28:55.184 [2024-11-18 23:24:14.472526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.472825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.472851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:55.184 [2024-11-18 23:24:14.472863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:28:55.184 [2024-11-18 23:24:14.472875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.476177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.476202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:55.184 [2024-11-18 23:24:14.476213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:28:55.184 [2024-11-18 23:24:14.476225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.482564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.482621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:55.184 [2024-11-18 23:24:14.482639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.319 ms 00:28:55.184 [2024-11-18 23:24:14.482651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.485588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:55.184 [2024-11-18 23:24:14.485653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:55.184 [2024-11-18 23:24:14.485664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:28:55.184 [2024-11-18 23:24:14.485676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.493012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.493078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:55.184 [2024-11-18 23:24:14.493090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.288 ms 00:28:55.184 [2024-11-18 23:24:14.493102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.493286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.493307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:55.184 [2024-11-18 23:24:14.493318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:55.184 [2024-11-18 23:24:14.493331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.496447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.496509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:55.184 [2024-11-18 23:24:14.496521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:28:55.184 [2024-11-18 23:24:14.496532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.184 [2024-11-18 23:24:14.499282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.184 [2024-11-18 23:24:14.499346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:55.184 [2024-11-18 23:24:14.499356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:28:55.185 [2024-11-18 23:24:14.499368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.185 [2024-11-18 23:24:14.501704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.185 [2024-11-18 23:24:14.501762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:55.185 [2024-11-18 23:24:14.501772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:28:55.185 [2024-11-18 23:24:14.501783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.185 [2024-11-18 23:24:14.504180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.185 [2024-11-18 23:24:14.504235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:55.185 [2024-11-18 23:24:14.504244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.323 ms 00:28:55.185 [2024-11-18 23:24:14.504254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.185 [2024-11-18 23:24:14.504337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:55.185 [2024-11-18 23:24:14.504358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504391] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504619] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 
23:24:14.504848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.504991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:28:55.185 [2024-11-18 23:24:14.505085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:55.185 [2024-11-18 23:24:14.505112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:55.186 [2024-11-18 23:24:14.505329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:55.186 [2024-11-18 23:24:14.505339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0 00:28:55.186 
[2024-11-18 23:24:14.505351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:55.186 [2024-11-18 23:24:14.505364] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:55.186 [2024-11-18 23:24:14.505374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:55.186 [2024-11-18 23:24:14.505383] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:55.186 [2024-11-18 23:24:14.505394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:55.186 [2024-11-18 23:24:14.505409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:55.186 [2024-11-18 23:24:14.505420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:55.186 [2024-11-18 23:24:14.505426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:55.186 [2024-11-18 23:24:14.505436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:55.186 [2024-11-18 23:24:14.505444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.186 [2024-11-18 23:24:14.505459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:55.186 [2024-11-18 23:24:14.505468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:28:55.186 [2024-11-18 23:24:14.505478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.508730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.186 [2024-11-18 23:24:14.508782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:55.186 [2024-11-18 23:24:14.508793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:28:55.186 [2024-11-18 23:24:14.508811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.508973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:55.186 [2024-11-18 23:24:14.508986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:55.186 [2024-11-18 23:24:14.508995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:28:55.186 [2024-11-18 23:24:14.509005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.519904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.519966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:55.186 [2024-11-18 23:24:14.519978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.519991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.520068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.520081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:55.186 [2024-11-18 23:24:14.520097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.520107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.520219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.520238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:55.186 [2024-11-18 23:24:14.520246] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.520258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.520277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.520297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:55.186 [2024-11-18 23:24:14.520304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.520314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.540150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.540226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:55.186 [2024-11-18 23:24:14.540239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.540257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.555842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.555913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:55.186 [2024-11-18 23:24:14.555926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.555947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.186 [2024-11-18 23:24:14.556088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.186 [2024-11-18 23:24:14.556105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:55.186 [2024-11-18 23:24:14.556114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.186 [2024-11-18 23:24:14.556125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.448 [2024-11-18 23:24:14.556216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:55.448 [2024-11-18 23:24:14.556227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.448 [2024-11-18 23:24:14.556241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.448 [2024-11-18 23:24:14.556353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:55.448 [2024-11-18 23:24:14.556361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.448 [2024-11-18 23:24:14.556371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.448 [2024-11-18 23:24:14.556430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:55.448 [2024-11-18 23:24:14.556440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.448 [2024-11-18 23:24:14.556455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.448 [2024-11-18 23:24:14.556536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:28:55.448 [2024-11-18 23:24:14.556547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.448 [2024-11-18 23:24:14.556559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:55.448 [2024-11-18 23:24:14.556644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:55.448 [2024-11-18 23:24:14.556657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:55.448 [2024-11-18 23:24:14.556668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:55.448 [2024-11-18 23:24:14.556841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 85.530 ms, result 0 00:28:55.448 true 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 93362 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93362 ']' 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93362 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93362 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:55.448 killing process with pid 93362 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93362' 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 93362 00:28:55.448 23:24:14 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 93362 00:29:00.745 23:24:19 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:04.046 262144+0 records in 00:29:04.046 262144+0 records out 00:29:04.046 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01483 s, 267 MB/s 00:29:04.046 23:24:23 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:06.589 23:24:25 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:06.589 [2024-11-18 23:24:25.422011] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:29:06.589 [2024-11-18 23:24:25.422175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93584 ] 00:29:06.589 [2024-11-18 23:24:25.575136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.589 [2024-11-18 23:24:25.648968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.589 [2024-11-18 23:24:25.799298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:06.589 [2024-11-18 23:24:25.799402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:06.589 [2024-11-18 23:24:25.962291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.589 [2024-11-18 23:24:25.962352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:06.589 [2024-11-18 23:24:25.962373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:06.589 [2024-11-18 23:24:25.962387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.589 [2024-11-18 23:24:25.962459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.589 [2024-11-18 23:24:25.962475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.589 [2024-11-18 23:24:25.962485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:06.589 [2024-11-18 23:24:25.962494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.589 [2024-11-18 23:24:25.962516] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:06.589 [2024-11-18 23:24:25.962954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:06.589 [2024-11-18 23:24:25.963013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.589 [2024-11-18 23:24:25.963023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.589 [2024-11-18 23:24:25.963038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:29:06.589 [2024-11-18 23:24:25.963055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.965663] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:06.850 [2024-11-18 23:24:25.970130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.970208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:06.850 [2024-11-18 23:24:25.970223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:29:06.850 [2024-11-18 23:24:25.970233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.970330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.970342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:06.850 [2024-11-18 23:24:25.970364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:06.850 [2024-11-18 23:24:25.970373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.982111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:06.850 [2024-11-18 23:24:25.982178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.850 [2024-11-18 23:24:25.982194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.683 ms 00:29:06.850 [2024-11-18 23:24:25.982213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.982323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.982334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.850 [2024-11-18 23:24:25.982343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:29:06.850 [2024-11-18 23:24:25.982351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.982426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.982438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:06.850 [2024-11-18 23:24:25.982448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:06.850 [2024-11-18 23:24:25.982455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.982492] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:06.850 [2024-11-18 23:24:25.985040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.985089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.850 [2024-11-18 23:24:25.985099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:29:06.850 [2024-11-18 23:24:25.985108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.985145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.985170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:06.850 [2024-11-18 23:24:25.985180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:06.850 [2024-11-18 23:24:25.985188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.985217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:06.850 [2024-11-18 23:24:25.985251] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:06.850 [2024-11-18 23:24:25.985291] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:06.850 [2024-11-18 23:24:25.985308] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:06.850 [2024-11-18 23:24:25.985420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:06.850 [2024-11-18 23:24:25.985439] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:06.850 [2024-11-18 23:24:25.985455] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:06.850 [2024-11-18 23:24:25.985465] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:06.850 [2024-11-18 23:24:25.985479] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:06.850 [2024-11-18 23:24:25.985487] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:06.850 [2024-11-18 23:24:25.985496] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:06.850 [2024-11-18 23:24:25.985508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:06.850 [2024-11-18 23:24:25.985516] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:06.850 [2024-11-18 23:24:25.985526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.985534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:06.850 [2024-11-18 23:24:25.985542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:29:06.850 [2024-11-18 23:24:25.985551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.985637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.850 [2024-11-18 23:24:25.985654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:06.850 [2024-11-18 23:24:25.985662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:06.850 [2024-11-18 23:24:25.985670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.850 [2024-11-18 23:24:25.985773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:06.851 [2024-11-18 23:24:25.985795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:06.851 [2024-11-18 23:24:25.985804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.851 [2024-11-18 23:24:25.985821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:06.851 [2024-11-18 23:24:25.985838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:06.851 [2024-11-18 23:24:25.985854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:06.851 [2024-11-18 23:24:25.985862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.851 [2024-11-18 23:24:25.985878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:06.851 [2024-11-18 23:24:25.985886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:06.851 [2024-11-18 23:24:25.985897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.851 [2024-11-18 23:24:25.985905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:06.851 [2024-11-18 23:24:25.985913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:06.851 [2024-11-18 23:24:25.985921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:06.851 [2024-11-18 23:24:25.985942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:06.851 [2024-11-18 23:24:25.985950] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:06.851 [2024-11-18 23:24:25.985966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.851 [2024-11-18 23:24:25.985982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:06.851 [2024-11-18 23:24:25.985990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:06.851 [2024-11-18 23:24:25.985998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.851 [2024-11-18 23:24:25.986005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:06.851 [2024-11-18 23:24:25.986014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.851 [2024-11-18 23:24:25.986038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:06.851 [2024-11-18 23:24:25.986047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.851 [2024-11-18 23:24:25.986064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:06.851 [2024-11-18 23:24:25.986071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.851 [2024-11-18 23:24:25.986083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:06.851 [2024-11-18 23:24:25.986090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:06.851 [2024-11-18 23:24:25.986097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.851 [2024-11-18 23:24:25.986104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:06.851 [2024-11-18 23:24:25.986111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:06.851 [2024-11-18 23:24:25.986118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:06.851 [2024-11-18 23:24:25.986130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:06.851 [2024-11-18 23:24:25.986137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:06.851 [2024-11-18 23:24:25.986173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:06.851 [2024-11-18 23:24:25.986181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.851 [2024-11-18 23:24:25.986196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.851 [2024-11-18 23:24:25.986204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:06.851 [2024-11-18 23:24:25.986213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:06.851 [2024-11-18 23:24:25.986221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:06.851 
[2024-11-18 23:24:25.986228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:06.851 [2024-11-18 23:24:25.986235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:06.851 [2024-11-18 23:24:25.986243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:06.851 [2024-11-18 23:24:25.986253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:06.851 [2024-11-18 23:24:25.986268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:06.851 [2024-11-18 23:24:25.986286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:06.851 [2024-11-18 23:24:25.986294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:06.851 [2024-11-18 23:24:25.986302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:06.851 [2024-11-18 23:24:25.986310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:06.851 [2024-11-18 23:24:25.986321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:06.851 [2024-11-18 23:24:25.986328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:06.851 [2024-11-18 23:24:25.986335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:06.851 [2024-11-18 23:24:25.986342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:06.851 [2024-11-18 23:24:25.986349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:06.851 [2024-11-18 23:24:25.986387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:06.851 [2024-11-18 23:24:25.986396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:06.851 [2024-11-18 23:24:25.986412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:06.851 [2024-11-18 23:24:25.986419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:06.851 [2024-11-18 23:24:25.986427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:06.851 [2024-11-18 23:24:25.986434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:25.986445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:06.851 [2024-11-18 23:24:25.986453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:29:06.851 [2024-11-18 23:24:25.986461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.030145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:26.030266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:06.851 [2024-11-18 23:24:26.030323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.613 ms 00:29:06.851 [2024-11-18 23:24:26.030345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.030630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:26.030671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:06.851 [2024-11-18 23:24:26.030697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:29:06.851 [2024-11-18 23:24:26.030717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.043099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:26.043138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:06.851 [2024-11-18 23:24:26.043148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.249 ms 00:29:06.851 [2024-11-18 23:24:26.043170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.043202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:26.043212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:06.851 [2024-11-18 23:24:26.043221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:06.851 [2024-11-18 23:24:26.043238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.043743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.851 [2024-11-18 23:24:26.043781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:06.851 [2024-11-18 23:24:26.043793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:29:06.851 [2024-11-18 23:24:26.043803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.851 [2024-11-18 23:24:26.043948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.043969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:06.852 [2024-11-18 23:24:26.043979] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:29:06.852 [2024-11-18 23:24:26.043989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.050552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.050586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:06.852 [2024-11-18 23:24:26.050602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.541 ms 00:29:06.852 [2024-11-18 23:24:26.050611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.053806] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:06.852 [2024-11-18 23:24:26.053852] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:06.852 [2024-11-18 23:24:26.053867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.053875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:06.852 [2024-11-18 23:24:26.053885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:29:06.852 [2024-11-18 23:24:26.053892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.069050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.069094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:06.852 [2024-11-18 23:24:26.069111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.041 ms 00:29:06.852 [2024-11-18 23:24:26.069123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.071122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.071171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:06.852 [2024-11-18 23:24:26.071182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:29:06.852 [2024-11-18 23:24:26.071190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.072742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.072775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:06.852 [2024-11-18 23:24:26.072784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:29:06.852 [2024-11-18 23:24:26.072791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.073129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.073152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:06.852 [2024-11-18 23:24:26.073174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:29:06.852 [2024-11-18 23:24:26.073182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.091615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.091667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:06.852 [2024-11-18 23:24:26.091683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
18.416 ms 00:29:06.852 [2024-11-18 23:24:26.091692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.099336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:06.852 [2024-11-18 23:24:26.102068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.102100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:06.852 [2024-11-18 23:24:26.102112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.336 ms 00:29:06.852 [2024-11-18 23:24:26.102128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.102198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.102212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:06.852 [2024-11-18 23:24:26.102223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:06.852 [2024-11-18 23:24:26.102232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.102305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.102316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:06.852 [2024-11-18 23:24:26.102324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:06.852 [2024-11-18 23:24:26.102332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.102357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.102365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:06.852 [2024-11-18 23:24:26.102374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:06.852 [2024-11-18 23:24:26.102386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.102424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:06.852 [2024-11-18 23:24:26.102435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.102445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:06.852 [2024-11-18 23:24:26.102453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:06.852 [2024-11-18 23:24:26.102461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.106013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.106053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:06.852 [2024-11-18 23:24:26.106063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.534 ms 00:29:06.852 [2024-11-18 23:24:26.106072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 [2024-11-18 23:24:26.106142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.852 [2024-11-18 23:24:26.106152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:06.852 [2024-11-18 23:24:26.106172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:06.852 [2024-11-18 23:24:26.106180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.852 
[2024-11-18 23:24:26.107264] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 144.478 ms, result 0 00:29:07.788  [2024-11-18T23:24:28.541Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-18T23:24:29.483Z] Copying: 56/1024 [MB] (34 MBps) [2024-11-18T23:24:30.425Z] Copying: 86/1024 [MB] (30 MBps) [2024-11-18T23:24:31.366Z] Copying: 105/1024 [MB] (19 MBps) [2024-11-18T23:24:32.310Z] Copying: 138/1024 [MB] (32 MBps) [2024-11-18T23:24:33.254Z] Copying: 165/1024 [MB] (26 MBps) [2024-11-18T23:24:34.195Z] Copying: 194/1024 [MB] (28 MBps) [2024-11-18T23:24:35.189Z] Copying: 225/1024 [MB] (31 MBps) [2024-11-18T23:24:36.136Z] Copying: 248/1024 [MB] (23 MBps) [2024-11-18T23:24:37.512Z] Copying: 267/1024 [MB] (18 MBps) [2024-11-18T23:24:38.456Z] Copying: 287/1024 [MB] (20 MBps) [2024-11-18T23:24:39.398Z] Copying: 303/1024 [MB] (16 MBps) [2024-11-18T23:24:40.344Z] Copying: 324/1024 [MB] (20 MBps) [2024-11-18T23:24:41.281Z] Copying: 340/1024 [MB] (15 MBps) [2024-11-18T23:24:42.218Z] Copying: 358/1024 [MB] (18 MBps) [2024-11-18T23:24:43.153Z] Copying: 386/1024 [MB] (27 MBps) [2024-11-18T23:24:44.530Z] Copying: 401/1024 [MB] (15 MBps) [2024-11-18T23:24:45.465Z] Copying: 417/1024 [MB] (15 MBps) [2024-11-18T23:24:46.400Z] Copying: 462/1024 [MB] (45 MBps) [2024-11-18T23:24:47.336Z] Copying: 479/1024 [MB] (16 MBps) [2024-11-18T23:24:48.276Z] Copying: 501/1024 [MB] (22 MBps) [2024-11-18T23:24:49.221Z] Copying: 526/1024 [MB] (25 MBps) [2024-11-18T23:24:50.164Z] Copying: 549/1024 [MB] (22 MBps) [2024-11-18T23:24:51.552Z] Copying: 565/1024 [MB] (16 MBps) [2024-11-18T23:24:52.124Z] Copying: 591/1024 [MB] (25 MBps) [2024-11-18T23:24:53.512Z] Copying: 607/1024 [MB] (15 MBps) [2024-11-18T23:24:54.456Z] Copying: 622/1024 [MB] (15 MBps) [2024-11-18T23:24:55.397Z] Copying: 636/1024 [MB] (13 MBps) [2024-11-18T23:24:56.339Z] Copying: 670/1024 [MB] (34 MBps) [2024-11-18T23:24:57.284Z] Copying: 696/1024 [MB] (25 MBps) [2024-11-18T23:24:58.229Z] Copying: 712/1024 [MB] (16 MBps) [2024-11-18T23:24:59.174Z] Copying: 723/1024 [MB] (11 MBps) [2024-11-18T23:25:00.561Z] Copying: 758/1024 [MB] (34 MBps) [2024-11-18T23:25:01.134Z] Copying: 786/1024 [MB] (28 MBps) [2024-11-18T23:25:02.522Z] Copying: 810/1024 [MB] (24 MBps) [2024-11-18T23:25:03.468Z] Copying: 833/1024 [MB] (22 MBps) [2024-11-18T23:25:04.414Z] Copying: 858/1024 [MB] (25 MBps) [2024-11-18T23:25:05.359Z] Copying: 873/1024 [MB] (14 MBps) [2024-11-18T23:25:06.303Z] Copying: 894/1024 [MB] (20 MBps) [2024-11-18T23:25:07.326Z] Copying: 911/1024 [MB] (17 MBps) [2024-11-18T23:25:08.269Z] Copying: 927/1024 [MB] (16 MBps) [2024-11-18T23:25:09.209Z] Copying: 951/1024 [MB] (23 MBps) [2024-11-18T23:25:10.148Z] Copying: 979/1024 [MB] (27 MBps) [2024-11-18T23:25:11.532Z] Copying: 998/1024 [MB] (18 MBps) [2024-11-18T23:25:12.104Z] Copying: 1008/1024 [MB] (10 MBps) [2024-11-18T23:25:12.104Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-18 23:25:11.940219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.726 [2024-11-18 23:25:11.940265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:52.726 [2024-11-18 23:25:11.940278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.726 [2024-11-18 23:25:11.940285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.726 [2024-11-18 23:25:11.940306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:52.726 
[2024-11-18 23:25:11.940847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.726
[2024-11-18 23:25:11.940872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:52.726
[2024-11-18 23:25:11.940880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:29:52.726
[2024-11-18 23:25:11.940887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.726
[2024-11-18 23:25:11.942503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.726
[2024-11-18 23:25:11.942540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:52.726
[2024-11-18 23:25:11.942548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:29:52.726
[2024-11-18 23:25:11.942554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.726
[2024-11-18 23:25:11.942576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.726
[2024-11-18 23:25:11.942586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:52.726
[2024-11-18 23:25:11.942596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:52.726
[2024-11-18 23:25:11.942602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.726
[2024-11-18 23:25:11.942644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.726
[2024-11-18 23:25:11.942651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:52.726
[2024-11-18 23:25:11.942658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:52.726
[2024-11-18 23:25:11.942671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.726
[2024-11-18 23:25:11.942682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:52.726
[2024-11-18 23:25:11.942692 - 23:25:11.943311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 - Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
[2024-11-18 23:25:11.943323] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:52.728
[2024-11-18 23:25:11.943331] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0 00:29:52.728
[2024-11-18 23:25:11.943338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:52.728
[2024-11-18 23:25:11.943345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:52.728
[2024-11-18 23:25:11.943351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:52.728
[2024-11-18 23:25:11.943356] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:52.728
[2024-11-18 23:25:11.943362] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:52.728
[2024-11-18 23:25:11.943368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:52.728
[2024-11-18 23:25:11.943374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:52.728
[2024-11-18 23:25:11.943379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:52.728
[2024-11-18 23:25:11.943384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:52.728
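The 'WAF: inf' in the statistics dump above is the write-amplification factor with a zero denominator: of the 32 total writes in this run, none were user writes, so every media write was device-internal and the ratio diverges. A minimal sketch of that arithmetic, assuming WAF is read as total media writes over user writes; the helper below is ours for illustration, not an SPDK API:

    def waf(total_writes: int, user_writes: int) -> float:
        """Write amplification factor: media writes per user write.

        With user_writes == 0 every write is device-internal, so the
        ratio diverges and the dump prints 'inf'.
        """
        return float("inf") if user_writes == 0 else total_writes / user_writes

    assert waf(32, 0) == float("inf")   # matches the dump above
    assert waf(32, 16) == 2.0           # e.g. two media writes per user write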
[2024-11-18 23:25:11.943390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.728
[2024-11-18 23:25:11.943396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:52.728
[2024-11-18 23:25:11.943403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:29:52.728
[2024-11-18 23:25:11.943411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.945084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.728
[2024-11-18 23:25:11.945114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:52.728
[2024-11-18 23:25:11.945121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:29:52.728
[2024-11-18 23:25:11.945127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.945224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:52.728
[2024-11-18 23:25:11.945233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:52.728
[2024-11-18 23:25:11.945240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:52.728
[2024-11-18 23:25:11.945249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.950333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.950357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:52.728
[2024-11-18 23:25:11.950364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.950371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.950418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.950425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:52.728
[2024-11-18 23:25:11.950432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.950449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.950473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.950480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:52.728
[2024-11-18 23:25:11.950486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.950491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.950503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.950509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:52.728
[2024-11-18 23:25:11.950515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.950521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.961541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.961580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:52.728
[2024-11-18 23:25:11.961590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.961597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.969926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.969974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.728
[2024-11-18 23:25:11.969984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.969994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.970064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.970073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.728
[2024-11-18 23:25:11.970080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.970086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.970107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.970114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.728
[2024-11-18 23:25:11.970120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.970130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.970186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.970194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.728
[2024-11-18 23:25:11.970200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.970206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.728
[2024-11-18 23:25:11.970228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.728
[2024-11-18 23:25:11.970235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:52.728
[2024-11-18 23:25:11.970242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.728
[2024-11-18 23:25:11.970251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.729
[2024-11-18 23:25:11.970284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.729
[2024-11-18 23:25:11.970294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.729
[2024-11-18 23:25:11.970300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.729
[2024-11-18 23:25:11.970307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.729
[2024-11-18 23:25:11.970345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.729
[2024-11-18 23:25:11.970353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.729
[2024-11-18 23:25:11.970360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.729
[2024-11-18 23:25:11.970366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.729
[2024-11-18 23:25:11.970487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 30.229 ms, result 0 00:29:53.300
00:29:53.300
00:29:53.300
23:25:12 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
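For scale, the --count=262144 in the spdk_dd invocation above lines up with the 1024 MB the copy progress reports: with a 4 KiB logical block size (an inference from the totals in this log, not something stated on the command line), 262144 blocks is exactly 1 GiB. A quick sanity check of that arithmetic:

    # spdk_dd --ib=ftl0 --of=.../testfile --count=262144
    BLOCKS = 262144
    BLOCK_SIZE = 4096  # bytes; assumed: 262144 * 4 KiB matches the 1024 MB reported

    total_bytes = BLOCKS * BLOCK_SIZE
    assert total_bytes == 1024 * 1024 * 1024      # 1 GiB
    print(f"{total_bytes // (1024 * 1024)} MB")   # -> 1024 MB, matching 'Copying: 1024/1024 [MB]'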
[2024-11-18 23:25:12.447701] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:29:53.300
[2024-11-18 23:25:12.447820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94100 ] 00:29:53.300
[2024-11-18 23:25:12.596708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.300
[2024-11-18 23:25:12.656302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.300
[2024-11-18 23:25:12.756771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:53.561
[2024-11-18 23:25:12.756832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:53.561
[2024-11-18 23:25:12.914378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.561
[2024-11-18 23:25:12.914425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:53.561
[2024-11-18 23:25:12.914459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:53.561
[2024-11-18 23:25:12.914473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.561
[2024-11-18 23:25:12.914525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.561
[2024-11-18 23:25:12.914541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.561
[2024-11-18 23:25:12.914549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:53.561
[2024-11-18 23:25:12.914557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.561
[2024-11-18 23:25:12.914577] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:53.561
[2024-11-18 23:25:12.914833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:53.561
[2024-11-18 23:25:12.914849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.561
[2024-11-18 23:25:12.914856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.561
[2024-11-18 23:25:12.914871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:29:53.561
[2024-11-18 23:25:12.914881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.561
[2024-11-18 23:25:12.915168] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:53.561
[2024-11-18 23:25:12.915192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.561
[2024-11-18 23:25:12.915201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:53.561
[2024-11-18 23:25:12.915214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:53.561
[2024-11-18 23:25:12.915222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.561
[2024-11-18 23:25:12.915272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.561
[2024-11-18 23:25:12.915284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:53.561
[2024-11-18 23:25:12.915293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:53.561
[2024-11-18 23:25:12.915299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.561
[2024-11-18 23:25:12.915583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:29:53.561 [2024-11-18 23:25:12.915595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.561 [2024-11-18 23:25:12.915606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:29:53.561 [2024-11-18 23:25:12.915614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.915690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.915702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.562 [2024-11-18 23:25:12.915710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:53.562 [2024-11-18 23:25:12.915717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.915742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.915751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:53.562 [2024-11-18 23:25:12.915758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:53.562 [2024-11-18 23:25:12.915765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.915784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:53.562 [2024-11-18 23:25:12.917747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.917775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.562 [2024-11-18 23:25:12.917791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.967 ms 00:29:53.562 [2024-11-18 23:25:12.917798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.917829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.917836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:53.562 [2024-11-18 23:25:12.917844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:53.562 [2024-11-18 23:25:12.917854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.917886] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:53.562 [2024-11-18 23:25:12.917906] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:53.562 [2024-11-18 23:25:12.917942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:53.562 [2024-11-18 23:25:12.917957] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:53.562 [2024-11-18 23:25:12.918065] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:53.562 [2024-11-18 23:25:12.918076] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:53.562 [2024-11-18 23:25:12.918090] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:53.562 [2024-11-18 23:25:12.918100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918112] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918125] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:53.562 [2024-11-18 23:25:12.918133] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:53.562 [2024-11-18 23:25:12.918140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:53.562 [2024-11-18 23:25:12.918147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:53.562 [2024-11-18 23:25:12.918171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.918178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:53.562 [2024-11-18 23:25:12.918186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:29:53.562 [2024-11-18 23:25:12.918193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.918275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.562 [2024-11-18 23:25:12.918283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:53.562 [2024-11-18 23:25:12.918293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:53.562 [2024-11-18 23:25:12.918300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.562 [2024-11-18 23:25:12.918417] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:53.562 [2024-11-18 23:25:12.918432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:53.562 [2024-11-18 23:25:12.918452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:53.562 [2024-11-18 23:25:12.918488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:53.562 [2024-11-18 23:25:12.918511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:53.562 [2024-11-18 23:25:12.918529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:53.562 [2024-11-18 23:25:12.918537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:53.562 [2024-11-18 23:25:12.918545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:53.562 [2024-11-18 23:25:12.918553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:53.562 [2024-11-18 23:25:12.918561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:53.562 [2024-11-18 23:25:12.918569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:53.562 [2024-11-18 23:25:12.918584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918592] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:53.562 [2024-11-18 23:25:12.918610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:53.562 [2024-11-18 23:25:12.918634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:53.562 [2024-11-18 23:25:12.918656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:53.562 [2024-11-18 23:25:12.918679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:53.562 [2024-11-18 23:25:12.918702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.562 [2024-11-18 23:25:12.918718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:53.562 [2024-11-18 23:25:12.918725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:53.562 [2024-11-18 23:25:12.918736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.562 [2024-11-18 23:25:12.918743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:53.562 [2024-11-18 23:25:12.918751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:53.562 [2024-11-18 23:25:12.918758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:53.562 [2024-11-18 23:25:12.918773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:53.562 [2024-11-18 23:25:12.918781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918789] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:53.562 [2024-11-18 23:25:12.918798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:53.562 [2024-11-18 23:25:12.918807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.562 [2024-11-18 23:25:12.918826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:53.562 [2024-11-18 23:25:12.918832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:53.562 [2024-11-18 23:25:12.918840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:53.562 
[2024-11-18 23:25:12.918847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:53.562 [2024-11-18 23:25:12.918853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:53.562 [2024-11-18 23:25:12.918862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:53.562 [2024-11-18 23:25:12.918871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:53.562 [2024-11-18 23:25:12.918881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.562 [2024-11-18 23:25:12.918889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:53.562 [2024-11-18 23:25:12.918896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:53.562 [2024-11-18 23:25:12.918904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:53.562 [2024-11-18 23:25:12.918911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:53.562 [2024-11-18 23:25:12.918918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:53.562 [2024-11-18 23:25:12.918925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:53.562 [2024-11-18 23:25:12.918932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:53.563 [2024-11-18 23:25:12.918939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:53.563 [2024-11-18 23:25:12.918946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:53.563 [2024-11-18 23:25:12.918953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.918960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.918967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.918975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.918984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:53.563 [2024-11-18 23:25:12.918991] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:53.563 [2024-11-18 23:25:12.919002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.919013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:53.563 [2024-11-18 23:25:12.919020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:53.563 [2024-11-18 23:25:12.919027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:53.563 [2024-11-18 23:25:12.919036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:53.563 [2024-11-18 23:25:12.919044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.563 [2024-11-18 23:25:12.919051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:53.563 [2024-11-18 23:25:12.919065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:29:53.563 [2024-11-18 23:25:12.919073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.936227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.936272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.823 [2024-11-18 23:25:12.936285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.103 ms 00:29:53.823 [2024-11-18 23:25:12.936294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.936386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.936396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:53.823 [2024-11-18 23:25:12.936410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:53.823 [2024-11-18 23:25:12.936418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.949043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.949093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.823 [2024-11-18 23:25:12.949106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.566 ms 00:29:53.823 [2024-11-18 23:25:12.949115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.949175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.949187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.823 [2024-11-18 23:25:12.949198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:29:53.823 [2024-11-18 23:25:12.949207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.949315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.949329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.823 [2024-11-18 23:25:12.949343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:53.823 [2024-11-18 23:25:12.949352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.823 [2024-11-18 23:25:12.949510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.823 [2024-11-18 23:25:12.949522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.824 [2024-11-18 23:25:12.949532] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:29:53.824 [2024-11-18 23:25:12.949545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.956718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.956756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.824 [2024-11-18 23:25:12.956773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.151 ms 00:29:53.824 [2024-11-18 23:25:12.956784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.956893] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:53.824 [2024-11-18 23:25:12.956906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:53.824 [2024-11-18 23:25:12.956919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.956933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:53.824 [2024-11-18 23:25:12.956941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:53.824 [2024-11-18 23:25:12.956948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.969408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.969448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:53.824 [2024-11-18 23:25:12.969458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.442 ms 00:29:53.824 [2024-11-18 23:25:12.969465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.969595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.969611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:53.824 [2024-11-18 23:25:12.969619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:29:53.824 [2024-11-18 23:25:12.969629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.969679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.969688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:53.824 [2024-11-18 23:25:12.969700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:29:53.824 [2024-11-18 23:25:12.969707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.970025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.970035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:53.824 [2024-11-18 23:25:12.970043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:29:53.824 [2024-11-18 23:25:12.970055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.970072] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:53.824 [2024-11-18 23:25:12.970082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.970089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:53.824 [2024-11-18 23:25:12.970099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:53.824 [2024-11-18 23:25:12.970107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.979831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:53.824 [2024-11-18 23:25:12.979985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.979997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:53.824 [2024-11-18 23:25:12.980007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.859 ms 00:29:53.824 [2024-11-18 23:25:12.980015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.982420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.982465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:53.824 [2024-11-18 23:25:12.982475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:29:53.824 [2024-11-18 23:25:12.982482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.982582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.982593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:53.824 [2024-11-18 23:25:12.982602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:29:53.824 [2024-11-18 23:25:12.982610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.982755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.982764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:53.824 [2024-11-18 23:25:12.982772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:53.824 [2024-11-18 23:25:12.982779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.982813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:53.824 [2024-11-18 23:25:12.982823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.982834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:53.824 [2024-11-18 23:25:12.982842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:53.824 [2024-11-18 23:25:12.982850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.989114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.989185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:53.824 [2024-11-18 23:25:12.989197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 00:29:53.824 [2024-11-18 23:25:12.989205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824 [2024-11-18 23:25:12.989289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.824 [2024-11-18 23:25:12.989300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:53.824 [2024-11-18 23:25:12.989309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:29:53.824
[2024-11-18 23:25:12.989317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.824
[2024-11-18 23:25:12.991084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 76.239 ms, result 0 00:29:55.205
[2024-11-18T23:26:13.607Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-11-18 23:26:13.560633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.229
[2024-11-18 23:26:13.560741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:54.229
[2024-11-18 23:26:13.560767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:54.229
[2024-11-18 23:26:13.560777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.229
[2024-11-18 23:26:13.560806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:54.229
[2024-11-18 23:26:13.561840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.229
[2024-11-18 23:26:13.561874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:54.229
[2024-11-18 23:26:13.561888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:30:54.229
[2024-11-18 23:26:13.561898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.229
[2024-11-18 23:26:13.562191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.229
[2024-11-18 23:26:13.562218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:54.229
[2024-11-18 23:26:13.562228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:30:54.229
[2024-11-18 23:26:13.562237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.229
[2024-11-18 23:26:13.562273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.229
[2024-11-18 23:26:13.562289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:54.229
[2024-11-18 23:26:13.562299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:54.229
[2024-11-18 23:26:13.562309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.229
[2024-11-18 23:26:13.562384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.229
[2024-11-18 23:26:13.562396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:54.229
[2024-11-18 23:26:13.562406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:54.229
[2024-11-18 23:26:13.562415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.229
[2024-11-18 23:26:13.562444] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:54.229
[2024-11-18 23:26:13.562467 - 23:26:13.563128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 - Band 78: 0 / 261120 wr_cnt: 0 state: free (all identical)
[2024-11-18 23:26:13.563139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:54.230 [2024-11-18 23:26:13.563311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:54.231 [2024-11-18 23:26:13.563320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:54.231 [2024-11-18 23:26:13.563328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:54.231 [2024-11-18 23:26:13.563336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:54.231 [2024-11-18 23:26:13.563353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:54.231 [2024-11-18 23:26:13.563365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0 00:30:54.231 [2024-11-18 23:26:13.563373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:54.231 [2024-11-18 23:26:13.563382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:54.231 
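All 100 bands come back free with wr_cnt 0, consistent with this pass having gone through the NV cache write buffer rather than the base device bands. When eyeballing dumps like this, a one-liner over a captured log is enough; ftl.log here is a hypothetical file holding the console output shown above:

  # Tally FTL band states from a captured autotest log; on a raw capture
  # every band line ends in "state: <s>", so this counts bands per state.
  grep -o 'state: [a-z]*' ftl.log | sort | uniq -c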
00:30:54.231 [2024-11-18 23:26:13.563365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0
00:30:54.231 [2024-11-18 23:26:13.563373] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:30:54.231 [2024-11-18 23:26:13.563382] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32
00:30:54.231 [2024-11-18 23:26:13.563391] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:30:54.231 [2024-11-18 23:26:13.563400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:30:54.231 [2024-11-18 23:26:13.563412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
00:30:54.231 [2024-11-18 23:26:13.563451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 1.009 ms, status: 0)
00:30:54.231 [2024-11-18 23:26:13.566668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 3.164 ms, status: 0)
00:30:54.231 [2024-11-18 23:26:13.566892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.131 ms, status: 0)
00:30:54.493 [2024-11-18 23:26:13.576993 - 23:26:13.615084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback (duration: 0.000 ms, status: 0 for each): Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:30:54.493 [2024-11-18 23:26:13.615270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 54.598 ms, result 0
00:30:54.754 23:26:13 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:56.668 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:56.668 23:26:15 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
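The two restore.sh steps above are a plain round-trip integrity check: verify the test file against its stored md5, then push it back through the FTL bdev at a block offset. A minimal sketch of the same pattern, reusing the exact paths and spdk_dd flags from the log (both are specific to this CI box and will differ elsewhere):

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK=/home/vagrant/spdk_repo/spdk
  TESTFILE=$SPDK/test/ftl/testfile

  # verify the data against its recorded checksum (restore.sh@76)
  md5sum -c "$TESTFILE.md5"

  # write it back through the FTL bdev at an output offset (restore.sh@79)
  "$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
      --json="$SPDK/test/ftl/config/ftl.json" --seek=131072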
00:30:56.668 [2024-11-18 23:26:15.773810] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:30:56.668 [2024-11-18 23:26:15.773924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94735 ]
00:30:56.668 [2024-11-18 23:26:15.918612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:56.668 [2024-11-18 23:26:15.969257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:30:56.931 [2024-11-18 23:26:16.084584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:56.931 [2024-11-18 23:26:16.084668] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:56.931 [2024-11-18 23:26:16.244720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.006 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.244872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.041 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.244928] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:56.931 [2024-11-18 23:26:16.245351] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:56.931 [2024-11-18 23:26:16.245397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.479 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.245763] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1
00:30:56.931 [2024-11-18 23:26:16.245802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 0.042 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.245901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.041 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.246212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 0.238 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.246336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.073 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.246460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.008 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.246508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:56.931 [2024-11-18 23:26:16.248964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 2.461 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.249065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.014 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.249167] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:56.931 [2024-11-18 23:26:16.249192] upgrade/ftl_sb_v5.c: 278-294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] blob load: nvc layout 0x150 bytes, base layout 0x48 bytes, layout 0x190 bytes
00:30:56.931 [2024-11-18 23:26:16.249364] upgrade/ftl_sb_v5.c: 92-109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] blob store: nvc layout 0x150 bytes, base layout 0x48 bytes, layout 0x190 bytes
00:30:56.931 [2024-11-18 23:26:16.249404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:30:56.931 [2024-11-18 23:26:16.249414] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:30:56.931 [2024-11-18 23:26:16.249427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:30:56.931 [2024-11-18 23:26:16.249439] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:30:56.931 [2024-11-18 23:26:16.249447] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:30:56.931 [2024-11-18 23:26:16.249455] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:30:56.931 [2024-11-18 23:26:16.249464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.315 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.249573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
00:30:56.931 [2024-11-18 23:26:16.249704] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB): sb 0.00 / 0.12, l2p 0.12 / 80.00, band_md 80.12 / 0.50, band_md_mirror 80.62 / 0.50, nvc_md 113.88 / 0.12, nvc_md_mirror 114.00 / 0.12, p2l0 81.12 / 8.00, p2l1 89.12 / 8.00, p2l2 97.12 / 8.00, p2l3 105.12 / 8.00, trim_md 113.12 / 0.25, trim_md_mirror 113.38 / 0.25, trim_log 113.62 / 0.12, trim_log_mirror 113.75 / 0.12
00:30:56.932 [2024-11-18 23:26:16.250055] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB): sb_mirror 0.00 / 0.12, vmap 102400.25 / 3.38, data_btm 0.25 / 102400.00
00:30:56.932 [2024-11-18 23:26:16.250136] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20; type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000; type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80; type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80; type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800; type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800; type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800; type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800; type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40; type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40; type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20; type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20; type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20; type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:30:56.932 [2024-11-18 23:26:16.250295] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20; type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20; type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000; type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360; type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:30:56.932 [2024-11-18 23:26:16.250343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.707 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.272220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 21.801 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.272463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.094 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.287048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 14.444 ms, status: 0)
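The reported geometry is internally consistent: the L2P holds one entry of "L2P address size" bytes per logical block, so the entry count times 4 bytes should account exactly for the l2p region. A quick check with plain shell arithmetic (no SPDK involved):

  entries=20971520   # "L2P entries" above
  addr_size=4        # "L2P address size" above
  echo "$(( entries * addr_size / 1024 / 1024 )) MiB"   # prints "80 MiB", matching the l2p region (80.00 MiB)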
00:30:56.932 [2024-11-18 23:26:16.287170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.003 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.287290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.046 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.287452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.115 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.295848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 8.337 ms, status: 0)
00:30:56.932 [2024-11-18 23:26:16.296038] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:30:56.932 [2024-11-18 23:26:16.296051] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:56.932 [2024-11-18 23:26:16.296062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 0.049 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.308568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 12.457 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.308776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 0.108 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.308857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 0.001 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.309230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.297 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.309300] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore
00:30:57.194 [2024-11-18 23:26:16.309310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 0.016 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.319604] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:30:57.194 [2024-11-18 23:26:16.319755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 10.399 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.322217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 2.411 ms, status: 0)
00:30:57.194 [2024-11-18 23:26:16.322370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.043 ms, status: 0)
00:30:57.195 [2024-11-18 23:26:16.322422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.005 ms, status: 0)
00:30:57.195 [2024-11-18 23:26:16.322503] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:57.195 [2024-11-18 23:26:16.322514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.013 ms, status: 0)
00:30:57.195 [2024-11-18 23:26:16.329048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 6.485 ms, status: 0)
00:30:57.195 [2024-11-18 23:26:16.329224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.040 ms, status: 0)
00:30:57.195 [2024-11-18 23:26:16.330489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 85.239 ms, result 0
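The copy pass below reports an average of 16 MBps over 1024 MB, so it should take roughly a minute of wall-clock time, which the progress timestamps (23:26:18 through 23:27:18) confirm. The same sanity check in shell arithmetic:

  # 1024 MB at the reported 16 MBps average
  echo "$(( 1024 / 16 )) s"   # 64 s, matching the ~60 s span of the timestamps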
00:30:58.140 [2024-11-18T23:26:18.461Z] Copying: 11/1024 [MB] (11 MBps) ... [2024-11-18T23:27:17.355Z] Copying: 1003/1024 [MB] (14 MBps) [2024-11-18T23:27:18.743Z] Copying: 1022/1024 [MB] (18 MBps) [2024-11-18T23:27:18.743Z] Copying: 1048432/1048576 [kB] (1856 kBps) [2024-11-18T23:27:18.743Z] Copying: 1024/1024 [MB] (average 16 MBps)
00:31:59.365 [2024-11-18 23:27:18.509864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.005 ms, status: 0)
00:31:59.365 [2024-11-18 23:27:18.512588] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:59.365 [2024-11-18 23:27:18.515216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 2.574 ms, status: 0)
00:31:59.365 [2024-11-18 23:27:18.528373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 8.932 ms, status: 0)
00:31:59.365 [2024-11-18 23:27:18.528504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Fast persist NV cache metadata (duration: 0.005 ms, status: 0)
00:31:59.365 [2024-11-18 23:27:18.528601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL SHM clean state (duration: 0.027 ms, status: 0)
00:31:59.365 [2024-11-18 23:27:18.528647] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:31:59.365 [2024-11-18 23:27:18.528662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130048 / 261120 wr_cnt: 1 state: open
00:31:59.365 [2024-11-18 23:27:18.528672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-93: 0 / 261120 wr_cnt: 0 state: free
00:31:59.366 [2024-11-18 23:27:18.529458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94:
0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:59.366 [2024-11-18 23:27:18.529523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:59.366 [2024-11-18 23:27:18.529534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0 00:31:59.366 [2024-11-18 23:27:18.529547] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130048 00:31:59.366 [2024-11-18 23:27:18.529555] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130080 00:31:59.366 [2024-11-18 23:27:18.529562] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130048 00:31:59.366 [2024-11-18 23:27:18.529571] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:59.366 [2024-11-18 23:27:18.529580] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:59.366 [2024-11-18 23:27:18.529588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:59.366 [2024-11-18 23:27:18.529603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:59.366 [2024-11-18 23:27:18.529610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:59.366 [2024-11-18 23:27:18.529616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:59.366 [2024-11-18 23:27:18.529624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.366 [2024-11-18 23:27:18.529633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:59.366 [2024-11-18 23:27:18.529641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:31:59.366 [2024-11-18 23:27:18.529655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.532492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.366 [2024-11-18 23:27:18.532544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:59.366 [2024-11-18 23:27:18.532560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:31:59.366 [2024-11-18 23:27:18.532571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.532729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.366 [2024-11-18 23:27:18.532745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:59.366 [2024-11-18 23:27:18.532754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:31:59.366 [2024-11-18 23:27:18.532767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 
23:27:18.541044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.541095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.366 [2024-11-18 23:27:18.541110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.541123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.541209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.541219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.366 [2024-11-18 23:27:18.541234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.541245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.541296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.541314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.366 [2024-11-18 23:27:18.541322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.541333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.541350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.541359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.366 [2024-11-18 23:27:18.541367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.541375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.559222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.559281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.366 [2024-11-18 23:27:18.559292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.559311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.573142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.573211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.366 [2024-11-18 23:27:18.573223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.573232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.573317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.573330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:59.366 [2024-11-18 23:27:18.573339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.573348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.573397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.573407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:59.366 [2024-11-18 23:27:18.573416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.573424] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.366 [2024-11-18 23:27:18.573485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.366 [2024-11-18 23:27:18.573497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:59.366 [2024-11-18 23:27:18.573506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.366 [2024-11-18 23:27:18.573514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.367 [2024-11-18 23:27:18.573545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.367 [2024-11-18 23:27:18.573570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:59.367 [2024-11-18 23:27:18.573578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.367 [2024-11-18 23:27:18.573587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.367 [2024-11-18 23:27:18.573633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.367 [2024-11-18 23:27:18.573753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:59.367 [2024-11-18 23:27:18.573764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.367 [2024-11-18 23:27:18.573773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.367 [2024-11-18 23:27:18.573831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:59.367 [2024-11-18 23:27:18.573845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:59.367 [2024-11-18 23:27:18.573855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:59.367 [2024-11-18 23:27:18.573864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.367 [2024-11-18 23:27:18.574013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 64.807 ms, result 0 00:32:00.312 00:32:00.312 00:32:00.312 23:27:19 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:00.312 [2024-11-18 23:27:19.674501] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
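The spdk_dd invocation above reads 262144 blocks from ftl0 starting 131072 blocks in, and the fast-shutdown stats dump just before it reported total writes 130080 against user writes 130048. A quick sanity check of those numbers in Python (a minimal sketch; the 4 KiB FTL block size is an assumption, chosen because it makes --count line up with the 1024 MB total reported by the copy progress further down):

FTL_BLOCK_SIZE = 4096  # bytes per FTL I/O unit (assumed, see note above)

# spdk_dd --skip=131072 --count=262144 (values from the command line above)
skip_bytes  = 131072 * FTL_BLOCK_SIZE   # offset into ftl0
count_bytes = 262144 * FTL_BLOCK_SIZE   # amount to copy
print(skip_bytes // 2**20, count_bytes // 2**20)  # -> 512 1024 (MiB)

# Write-amplification factor from the fast-shutdown stats dump above
total_writes = 130080  # all blocks written to media (user data + metadata)
user_writes  = 130048  # blocks the host itself wrote
print(round(total_writes / user_writes, 4))       # -> 1.0002, as logged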
00:32:00.312 [2024-11-18 23:27:19.674660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95376 ] 00:32:00.574 [2024-11-18 23:27:19.827531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.574 [2024-11-18 23:27:19.878963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.835 [2024-11-18 23:27:19.985576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:00.835 [2024-11-18 23:27:19.985645] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:00.835 [2024-11-18 23:27:20.144260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.144312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:00.835 [2024-11-18 23:27:20.144328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:00.835 [2024-11-18 23:27:20.144337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.144386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.144397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:00.835 [2024-11-18 23:27:20.144409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:00.835 [2024-11-18 23:27:20.144417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.144440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:00.835 [2024-11-18 23:27:20.144917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:00.835 [2024-11-18 23:27:20.144956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.144967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:00.835 [2024-11-18 23:27:20.144984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:32:00.835 [2024-11-18 23:27:20.144997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.145704] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:00.835 [2024-11-18 23:27:20.145755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.145766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:00.835 [2024-11-18 23:27:20.145785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:00.835 [2024-11-18 23:27:20.145793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.145847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.145862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:00.835 [2024-11-18 23:27:20.145870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:00.835 [2024-11-18 23:27:20.145877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.146123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
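Each FTL management step in this startup sequence is logged by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruple, and the whole pipeline is summed up at the end by a 'Management process finished' line. A minimal, hypothetical parser sketch that recovers per-step timings from such output (the regexes and the two-line excerpt are illustrative, not an SPDK API):

import re

# Two real entries from the startup sequence above, standing in for a full log
log = """\
[2024-11-18 23:27:20.144312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
[2024-11-18 23:27:20.144328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
"""

names     = re.findall(r"name: (.+)", log)
durations = [float(ms) for ms in re.findall(r"duration: ([0-9.]+) ms", log)]
for name, ms in zip(names, durations):
    print(f"{name}: {ms} ms")
# Summing every step's duration approximates (but is not defined to equal)
# the final 'Management process finished ... duration' figure.
print("total:", sum(durations), "ms")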
00:32:00.835 [2024-11-18 23:27:20.146140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:00.835 [2024-11-18 23:27:20.146151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:32:00.835 [2024-11-18 23:27:20.146179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.146254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.146274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:00.835 [2024-11-18 23:27:20.146282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:32:00.835 [2024-11-18 23:27:20.146289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.146311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.146320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:00.835 [2024-11-18 23:27:20.146328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:00.835 [2024-11-18 23:27:20.146336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.835 [2024-11-18 23:27:20.146360] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:00.835 [2024-11-18 23:27:20.148091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.835 [2024-11-18 23:27:20.148121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:00.836 [2024-11-18 23:27:20.148130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:32:00.836 [2024-11-18 23:27:20.148141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.836 [2024-11-18 23:27:20.148189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.836 [2024-11-18 23:27:20.148199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:00.836 [2024-11-18 23:27:20.148207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:00.836 [2024-11-18 23:27:20.148215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.836 [2024-11-18 23:27:20.148259] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:00.836 [2024-11-18 23:27:20.148280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:00.836 [2024-11-18 23:27:20.148316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:00.836 [2024-11-18 23:27:20.148336] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:00.836 [2024-11-18 23:27:20.148440] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:00.836 [2024-11-18 23:27:20.148457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:00.836 [2024-11-18 23:27:20.148468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:00.836 [2024-11-18 23:27:20.148479] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148487] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148501] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:00.836 [2024-11-18 23:27:20.148512] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:00.836 [2024-11-18 23:27:20.148521] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:00.836 [2024-11-18 23:27:20.148531] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:00.836 [2024-11-18 23:27:20.148539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.836 [2024-11-18 23:27:20.148546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:00.836 [2024-11-18 23:27:20.148554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:32:00.836 [2024-11-18 23:27:20.148565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.836 [2024-11-18 23:27:20.148647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.836 [2024-11-18 23:27:20.148656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:00.836 [2024-11-18 23:27:20.148663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:00.836 [2024-11-18 23:27:20.148672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.836 [2024-11-18 23:27:20.148777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:00.836 [2024-11-18 23:27:20.148789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:00.836 [2024-11-18 23:27:20.148798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:00.836 [2024-11-18 23:27:20.148834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:00.836 [2024-11-18 23:27:20.148859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:00.836 [2024-11-18 23:27:20.148875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:00.836 [2024-11-18 23:27:20.148882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:00.836 [2024-11-18 23:27:20.148891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:00.836 [2024-11-18 23:27:20.148899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:00.836 [2024-11-18 23:27:20.148907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:00.836 [2024-11-18 23:27:20.148916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:00.836 [2024-11-18 23:27:20.148932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148940] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:00.836 [2024-11-18 23:27:20.148956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:00.836 [2024-11-18 23:27:20.148982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:00.836 [2024-11-18 23:27:20.148990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:00.836 [2024-11-18 23:27:20.148997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:00.836 [2024-11-18 23:27:20.149005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:00.836 [2024-11-18 23:27:20.149019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:00.836 [2024-11-18 23:27:20.149027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:00.836 [2024-11-18 23:27:20.149043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:00.836 [2024-11-18 23:27:20.149050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:00.836 [2024-11-18 23:27:20.149066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:00.836 [2024-11-18 23:27:20.149074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:00.836 [2024-11-18 23:27:20.149082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:00.836 [2024-11-18 23:27:20.149091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:00.836 [2024-11-18 23:27:20.149099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:00.836 [2024-11-18 23:27:20.149106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:00.836 [2024-11-18 23:27:20.149121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:00.836 [2024-11-18 23:27:20.149128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:00.836 [2024-11-18 23:27:20.149144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:00.836 [2024-11-18 23:27:20.149166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:00.836 [2024-11-18 23:27:20.149178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:00.836 [2024-11-18 23:27:20.149186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:00.836 [2024-11-18 23:27:20.149193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:00.836 [2024-11-18 23:27:20.149200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:00.836 
[2024-11-18 23:27:20.149207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:00.836 [2024-11-18 23:27:20.149214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:00.836 [2024-11-18 23:27:20.149220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:00.836 [2024-11-18 23:27:20.149233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:00.836 [2024-11-18 23:27:20.149244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:00.836 [2024-11-18 23:27:20.149253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:00.836 [2024-11-18 23:27:20.149260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:00.836 [2024-11-18 23:27:20.149268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:00.836 [2024-11-18 23:27:20.149275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:00.836 [2024-11-18 23:27:20.149283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:00.836 [2024-11-18 23:27:20.149290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:00.836 [2024-11-18 23:27:20.149297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:00.836 [2024-11-18 23:27:20.149304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:00.836 [2024-11-18 23:27:20.149312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:00.836 [2024-11-18 23:27:20.149319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:00.836 [2024-11-18 23:27:20.149325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:00.836 [2024-11-18 23:27:20.149333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:00.836 [2024-11-18 23:27:20.149341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:00.836 [2024-11-18 23:27:20.149348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:00.836 [2024-11-18 23:27:20.149358] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:00.836 [2024-11-18 23:27:20.149366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:00.837 [2024-11-18 23:27:20.149374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:00.837 [2024-11-18 23:27:20.149381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:00.837 [2024-11-18 23:27:20.149388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:00.837 [2024-11-18 23:27:20.149396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:00.837 [2024-11-18 23:27:20.149403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.149410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:00.837 [2024-11-18 23:27:20.149417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:32:00.837 [2024-11-18 23:27:20.149424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.168401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.168471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:00.837 [2024-11-18 23:27:20.168502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.929 ms 00:32:00.837 [2024-11-18 23:27:20.168520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.168708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.168732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:00.837 [2024-11-18 23:27:20.168753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:32:00.837 [2024-11-18 23:27:20.168772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.181361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.181393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:00.837 [2024-11-18 23:27:20.181411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:32:00.837 [2024-11-18 23:27:20.181418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.181446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.181454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:00.837 [2024-11-18 23:27:20.181463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:00.837 [2024-11-18 23:27:20.181471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.181547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.181558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:00.837 [2024-11-18 23:27:20.181571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:00.837 [2024-11-18 23:27:20.181581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.181696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.181705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:00.837 [2024-11-18 23:27:20.181713] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:00.837 [2024-11-18 23:27:20.181721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.187441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.187471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:00.837 [2024-11-18 23:27:20.187480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.691 ms 00:32:00.837 [2024-11-18 23:27:20.187490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.187587] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:00.837 [2024-11-18 23:27:20.187600] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:00.837 [2024-11-18 23:27:20.187611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.187622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:00.837 [2024-11-18 23:27:20.187636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:00.837 [2024-11-18 23:27:20.187644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.202913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.202956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:00.837 [2024-11-18 23:27:20.202966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.253 ms 00:32:00.837 [2024-11-18 23:27:20.202974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.203093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.203105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:00.837 [2024-11-18 23:27:20.203112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:32:00.837 [2024-11-18 23:27:20.203120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.203229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.203239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:00.837 [2024-11-18 23:27:20.203247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:00.837 [2024-11-18 23:27:20.203257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.203557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.203568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:00.837 [2024-11-18 23:27:20.203576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:32:00.837 [2024-11-18 23:27:20.203587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.837 [2024-11-18 23:27:20.203601] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:00.837 [2024-11-18 23:27:20.203613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.837 [2024-11-18 23:27:20.203621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:00.837 [2024-11-18 23:27:20.203628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:00.837 [2024-11-18 23:27:20.203638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.212356] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:01.098 [2024-11-18 23:27:20.212475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.212485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:01.098 [2024-11-18 23:27:20.212493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.822 ms 00:32:01.098 [2024-11-18 23:27:20.212500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.214957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.214984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:01.098 [2024-11-18 23:27:20.214993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:32:01.098 [2024-11-18 23:27:20.215000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.215050] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:32:01.098 [2024-11-18 23:27:20.215629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.215650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:01.098 [2024-11-18 23:27:20.215659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:32:01.098 [2024-11-18 23:27:20.215668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.215705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.215715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:01.098 [2024-11-18 23:27:20.215723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:01.098 [2024-11-18 23:27:20.215730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.215764] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:01.098 [2024-11-18 23:27:20.215774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.215781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:01.098 [2024-11-18 23:27:20.215795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:01.098 [2024-11-18 23:27:20.215802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.220695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.220731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:01.098 [2024-11-18 23:27:20.220740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:32:01.098 [2024-11-18 23:27:20.220748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.220813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.098 [2024-11-18 23:27:20.220823] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:01.098 [2024-11-18 23:27:20.220831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:01.098 [2024-11-18 23:27:20.220838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.098 [2024-11-18 23:27:20.221811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 77.151 ms, result 0 00:32:02.042  [2024-11-18T23:27:22.810Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-18T23:27:23.756Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-18T23:27:24.701Z] Copying: 32/1024 [MB] (11 MBps) [2024-11-18T23:27:25.645Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-18T23:27:26.590Z] Copying: 53/1024 [MB] (10 MBps) [2024-11-18T23:27:27.534Z] Copying: 64/1024 [MB] (11 MBps) [2024-11-18T23:27:28.478Z] Copying: 78/1024 [MB] (13 MBps) [2024-11-18T23:27:29.435Z] Copying: 95/1024 [MB] (16 MBps) [2024-11-18T23:27:30.824Z] Copying: 108/1024 [MB] (13 MBps) [2024-11-18T23:27:31.769Z] Copying: 126/1024 [MB] (18 MBps) [2024-11-18T23:27:32.713Z] Copying: 137/1024 [MB] (11 MBps) [2024-11-18T23:27:33.658Z] Copying: 158/1024 [MB] (20 MBps) [2024-11-18T23:27:34.602Z] Copying: 177/1024 [MB] (18 MBps) [2024-11-18T23:27:35.544Z] Copying: 198/1024 [MB] (20 MBps) [2024-11-18T23:27:36.487Z] Copying: 213/1024 [MB] (15 MBps) [2024-11-18T23:27:37.432Z] Copying: 232/1024 [MB] (19 MBps) [2024-11-18T23:27:38.820Z] Copying: 251/1024 [MB] (18 MBps) [2024-11-18T23:27:39.764Z] Copying: 269/1024 [MB] (18 MBps) [2024-11-18T23:27:40.708Z] Copying: 286/1024 [MB] (16 MBps) [2024-11-18T23:27:41.652Z] Copying: 298/1024 [MB] (12 MBps) [2024-11-18T23:27:42.597Z] Copying: 310/1024 [MB] (11 MBps) [2024-11-18T23:27:43.539Z] Copying: 323/1024 [MB] (12 MBps) [2024-11-18T23:27:44.543Z] Copying: 335/1024 [MB] (12 MBps) [2024-11-18T23:27:45.486Z] Copying: 348/1024 [MB] (12 MBps) [2024-11-18T23:27:46.428Z] Copying: 360/1024 [MB] (12 MBps) [2024-11-18T23:27:47.811Z] Copying: 373/1024 [MB] (12 MBps) [2024-11-18T23:27:48.755Z] Copying: 392/1024 [MB] (19 MBps) [2024-11-18T23:27:49.699Z] Copying: 406/1024 [MB] (13 MBps) [2024-11-18T23:27:50.644Z] Copying: 418/1024 [MB] (12 MBps) [2024-11-18T23:27:51.590Z] Copying: 429/1024 [MB] (10 MBps) [2024-11-18T23:27:52.535Z] Copying: 440/1024 [MB] (11 MBps) [2024-11-18T23:27:53.479Z] Copying: 455/1024 [MB] (15 MBps) [2024-11-18T23:27:54.425Z] Copying: 468/1024 [MB] (13 MBps) [2024-11-18T23:27:55.813Z] Copying: 495/1024 [MB] (26 MBps) [2024-11-18T23:27:56.757Z] Copying: 518/1024 [MB] (22 MBps) [2024-11-18T23:27:57.700Z] Copying: 533/1024 [MB] (15 MBps) [2024-11-18T23:27:58.645Z] Copying: 546/1024 [MB] (13 MBps) [2024-11-18T23:27:59.589Z] Copying: 565/1024 [MB] (18 MBps) [2024-11-18T23:28:00.534Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-18T23:28:01.477Z] Copying: 594/1024 [MB] (18 MBps) [2024-11-18T23:28:02.421Z] Copying: 610/1024 [MB] (15 MBps) [2024-11-18T23:28:03.806Z] Copying: 624/1024 [MB] (14 MBps) [2024-11-18T23:28:04.751Z] Copying: 643/1024 [MB] (19 MBps) [2024-11-18T23:28:05.695Z] Copying: 654/1024 [MB] (10 MBps) [2024-11-18T23:28:06.639Z] Copying: 670/1024 [MB] (16 MBps) [2024-11-18T23:28:07.583Z] Copying: 686/1024 [MB] (15 MBps) [2024-11-18T23:28:08.527Z] Copying: 700/1024 [MB] (14 MBps) [2024-11-18T23:28:09.470Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-18T23:28:10.414Z] Copying: 729/1024 [MB] (18 MBps) [2024-11-18T23:28:11.802Z] Copying: 740/1024 [MB] (10 MBps) [2024-11-18T23:28:12.746Z] Copying: 750/1024 [MB] (10 MBps) 
[2024-11-18T23:28:13.689Z] Copying: 760/1024 [MB] (10 MBps) [2024-11-18T23:28:14.632Z] Copying: 779/1024 [MB] (18 MBps) [2024-11-18T23:28:15.575Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-18T23:28:16.577Z] Copying: 802/1024 [MB] (13 MBps) [2024-11-18T23:28:17.521Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-18T23:28:18.466Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-18T23:28:19.854Z] Copying: 836/1024 [MB] (11 MBps) [2024-11-18T23:28:20.429Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-18T23:28:21.817Z] Copying: 864/1024 [MB] (16 MBps) [2024-11-18T23:28:22.763Z] Copying: 885/1024 [MB] (20 MBps) [2024-11-18T23:28:23.707Z] Copying: 902/1024 [MB] (17 MBps) [2024-11-18T23:28:24.649Z] Copying: 914/1024 [MB] (12 MBps) [2024-11-18T23:28:25.593Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-18T23:28:26.537Z] Copying: 936/1024 [MB] (11 MBps) [2024-11-18T23:28:27.481Z] Copying: 961/1024 [MB] (25 MBps) [2024-11-18T23:28:28.426Z] Copying: 976/1024 [MB] (14 MBps) [2024-11-18T23:28:29.816Z] Copying: 991/1024 [MB] (14 MBps) [2024-11-18T23:28:30.389Z] Copying: 1005/1024 [MB] (14 MBps) [2024-11-18T23:28:30.652Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-18 23:28:30.621633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.274 [2024-11-18 23:28:30.621962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:11.274 [2024-11-18 23:28:30.622071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:11.274 [2024-11-18 23:28:30.622100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.274 [2024-11-18 23:28:30.622174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:11.274 [2024-11-18 23:28:30.623332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.274 [2024-11-18 23:28:30.623509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:11.274 [2024-11-18 23:28:30.623575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:33:11.274 [2024-11-18 23:28:30.623601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.274 [2024-11-18 23:28:30.623904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.274 [2024-11-18 23:28:30.623933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:11.274 [2024-11-18 23:28:30.623962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:33:11.274 [2024-11-18 23:28:30.623982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.274 [2024-11-18 23:28:30.624029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.274 [2024-11-18 23:28:30.624052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:11.274 [2024-11-18 23:28:30.624075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:11.274 [2024-11-18 23:28:30.624214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.274 [2024-11-18 23:28:30.624311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.274 [2024-11-18 23:28:30.624338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:11.274 [2024-11-18 23:28:30.624362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:11.274 [2024-11-18 23:28:30.624386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
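The copy finished at 1024/1024 MB with spdk_dd reporting an average of 14 MBps over its sampling intervals. A rough cross-check against the first and last progress timestamps (a sketch; the two figures need not agree exactly, since the timestamps only bracket the 10..1024 MB span):

from datetime import datetime

# First and last 'Copying:' progress timestamps from the run above
# (the trailing Z is dropped so fromisoformat works on older Pythons)
t0 = datetime.fromisoformat("2024-11-18T23:27:22.810")  # 10/1024 MB done
t1 = datetime.fromisoformat("2024-11-18T23:28:30.652")  # 1024/1024 MB done
elapsed = (t1 - t0).total_seconds()                     # ~67.8 s
print(round((1024 - 10) / elapsed, 1), "MB/s")          # ~14.9 MB/s, close to
                                                        # the logged 14 MBps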
00:33:11.274 [2024-11-18 23:28:30.624413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:11.274 [2024-11-18 23:28:30.624440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:11.274 [2024-11-18 23:28:30.624473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-73: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625436] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:11.275 [2024-11-18 23:28:30.625461] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:11.275 [2024-11-18 23:28:30.625469] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 010f5a10-8b64-4503-bab5-5638e96e85f0 00:33:11.275 [2024-11-18 23:28:30.625484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:11.275 [2024-11-18 23:28:30.625492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1056 00:33:11.275 [2024-11-18 23:28:30.625499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:33:11.275 [2024-11-18 23:28:30.625509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0312 00:33:11.275 [2024-11-18 23:28:30.625516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:11.275 [2024-11-18 23:28:30.625524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:11.275 [2024-11-18 23:28:30.625551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:11.275 [2024-11-18 23:28:30.625558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:11.275 [2024-11-18 23:28:30.625565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:11.275 [2024-11-18 23:28:30.625574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.275 [2024-11-18 23:28:30.625583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:11.275 [2024-11-18 23:28:30.625592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:33:11.275 [2024-11-18 23:28:30.625601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.275 [2024-11-18 23:28:30.630080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.275 [2024-11-18 23:28:30.630267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:11.275 [2024-11-18 23:28:30.630347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.458 ms 00:33:11.275 [2024-11-18 23:28:30.630372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.275 [2024-11-18 23:28:30.630586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.275 [2024-11-18 23:28:30.630624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:11.275 [2024-11-18 23:28:30.630778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:33:11.275 [2024-11-18 23:28:30.630803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.275 [2024-11-18 23:28:30.640958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.275 [2024-11-18 23:28:30.641128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.275 [2024-11-18 23:28:30.641206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.275 [2024-11-18 23:28:30.641241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.275 [2024-11-18 23:28:30.641332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.275 [2024-11-18 23:28:30.641356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.275 [2024-11-18 
23:28:30.641377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.275 [2024-11-18 23:28:30.641396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.275 [2024-11-18 23:28:30.641498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.276 [2024-11-18 23:28:30.641524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.276 [2024-11-18 23:28:30.641547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.276 [2024-11-18 23:28:30.641625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.276 [2024-11-18 23:28:30.641670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.276 [2024-11-18 23:28:30.641692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.276 [2024-11-18 23:28:30.641713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.276 [2024-11-18 23:28:30.641733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.537 [2024-11-18 23:28:30.662842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.537 [2024-11-18 23:28:30.663047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.537 [2024-11-18 23:28:30.663066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.537 [2024-11-18 23:28:30.663081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.537 [2024-11-18 23:28:30.679932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.537 [2024-11-18 23:28:30.680094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.537 [2024-11-18 23:28:30.680151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.537 [2024-11-18 23:28:30.680217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.537 [2024-11-18 23:28:30.680317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.537 [2024-11-18 23:28:30.680342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.538 [2024-11-18 23:28:30.680406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.680430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.680489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.538 [2024-11-18 23:28:30.680513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.538 [2024-11-18 23:28:30.680533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.680582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.680670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.538 [2024-11-18 23:28:30.680695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.538 [2024-11-18 23:28:30.680716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.680736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.680773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.538 [2024-11-18 23:28:30.680811] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:11.538 [2024-11-18 23:28:30.680885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.680909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.680973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.538 [2024-11-18 23:28:30.680995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:11.538 [2024-11-18 23:28:30.681015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.681036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.681116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:11.538 [2024-11-18 23:28:30.681213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.538 [2024-11-18 23:28:30.681239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:11.538 [2024-11-18 23:28:30.681259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.538 [2024-11-18 23:28:30.681448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.773 ms, result 0 00:33:11.798 00:33:11.799 00:33:11.799 23:28:31 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:14.349 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 93362 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 93362 ']' 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 93362 00:33:14.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (93362) - No such process 00:33:14.349 Process with pid 93362 is not found 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 93362 is not found' 00:33:14.349 Remove shared memory files 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_band_md /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_l2p_l1 /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_l2p_l2 /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_l2p_l2_ctx /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_nvc_md /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_p2l_pool /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_sb 
/dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_sb_shm /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_trim_bitmap /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_trim_log /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_trim_md /dev/hugepages/ftl_010f5a10-8b64-4503-bab5-5638e96e85f0_vmap 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:33:14.349 ************************************ 00:33:14.349 END TEST ftl_restore_fast 00:33:14.349 ************************************ 00:33:14.349 00:33:14.349 real 4m27.136s 00:33:14.349 user 4m14.553s 00:33:14.349 sys 0m12.435s 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:14.349 23:28:33 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@14 -- # killprocess 84272 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 84272 ']' 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@954 -- # kill -0 84272 00:33:14.349 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (84272) - No such process 00:33:14.349 Process with pid 84272 is not found 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 84272 is not found' 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96154 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96154 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@831 -- # '[' -z 96154 ']' 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:14.349 23:28:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:14.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:14.349 23:28:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:14.349 [2024-11-18 23:28:33.624215] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:33:14.349 [2024-11-18 23:28:33.624382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96154 ] 00:33:14.610 [2024-11-18 23:28:33.775803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.610 [2024-11-18 23:28:33.850055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.182 23:28:34 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:15.182 23:28:34 ftl -- common/autotest_common.sh@864 -- # return 0 00:33:15.182 23:28:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:15.444 nvme0n1 00:33:15.444 23:28:34 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:15.444 23:28:34 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:15.444 23:28:34 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:15.705 23:28:34 ftl -- ftl/common.sh@28 -- # stores=8d294bbd-be84-4cc8-975e-5ff15e1e36b3 00:33:15.705 23:28:34 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:15.705 23:28:34 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d294bbd-be84-4cc8-975e-5ff15e1e36b3 00:33:15.966 23:28:35 ftl -- ftl/ftl.sh@23 -- # killprocess 96154 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@950 -- # '[' -z 96154 ']' 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@954 -- # kill -0 96154 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@955 -- # uname 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96154 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96154' 00:33:15.966 killing process with pid 96154 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@969 -- # kill 96154 00:33:15.966 23:28:35 ftl -- common/autotest_common.sh@974 -- # wait 96154 00:33:16.539 23:28:35 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:16.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:16.800 Waiting for block devices as requested 00:33:16.800 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:16.800 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:17.060 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:17.060 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:22.347 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:22.347 23:28:41 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:22.347 Remove shared memory files 00:33:22.347 23:28:41 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:22.347 23:28:41 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:22.347 23:28:41 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:22.347 23:28:41 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:22.347 23:28:41 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:22.347 23:28:41 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:22.347 00:33:22.347 real 
17m47.555s 00:33:22.347 user 19m41.190s 00:33:22.347 sys 1m26.067s 00:33:22.347 23:28:41 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:22.347 23:28:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:22.347 ************************************ 00:33:22.347 END TEST ftl 00:33:22.347 ************************************ 00:33:22.347 23:28:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:22.347 23:28:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:22.347 23:28:41 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:22.347 23:28:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:22.347 23:28:41 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:22.347 23:28:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:22.347 23:28:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:22.347 23:28:41 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:22.348 23:28:41 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:22.348 23:28:41 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:22.348 23:28:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:22.348 23:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:22.348 23:28:41 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:22.348 23:28:41 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:22.348 23:28:41 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:22.348 23:28:41 -- common/autotest_common.sh@10 -- # set +x 00:33:23.733 INFO: APP EXITING 00:33:23.733 INFO: killing all VMs 00:33:23.733 INFO: killing vhost app 00:33:23.733 INFO: EXIT DONE 00:33:23.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:24.565 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:24.565 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:24.565 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:24.565 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:24.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:25.397 Cleaning 00:33:25.397 Removing: /var/run/dpdk/spdk0/config 00:33:25.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:25.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:25.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:25.397 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:25.397 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:25.397 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:25.397 Removing: /var/run/dpdk/spdk0 00:33:25.397 Removing: /var/run/dpdk/spdk_pid69724 00:33:25.397 Removing: /var/run/dpdk/spdk_pid69887 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70083 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70171 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70194 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70305 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70323 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70506 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70579 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70659 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70753 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70834 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70868 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70910 00:33:25.397 Removing: /var/run/dpdk/spdk_pid70975 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71081 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71506 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71548 00:33:25.397 
Removing: /var/run/dpdk/spdk_pid71600 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71616 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71674 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71690 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71748 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71764 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71806 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71824 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71866 00:33:25.397 Removing: /var/run/dpdk/spdk_pid71884 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72011 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72053 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72131 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72292 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72365 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72396 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72827 00:33:25.397 Removing: /var/run/dpdk/spdk_pid72919 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73025 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73068 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73088 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73172 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73776 00:33:25.397 Removing: /var/run/dpdk/spdk_pid73807 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74253 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74346 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74455 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74497 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74523 00:33:25.397 Removing: /var/run/dpdk/spdk_pid74543 00:33:25.397 Removing: /var/run/dpdk/spdk_pid76375 00:33:25.397 Removing: /var/run/dpdk/spdk_pid76497 00:33:25.397 Removing: /var/run/dpdk/spdk_pid76501 00:33:25.397 Removing: /var/run/dpdk/spdk_pid76513 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76559 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76563 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76575 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76620 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76624 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76636 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76681 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76685 00:33:25.398 Removing: /var/run/dpdk/spdk_pid76697 00:33:25.398 Removing: /var/run/dpdk/spdk_pid78061 00:33:25.398 Removing: /var/run/dpdk/spdk_pid78147 00:33:25.398 Removing: /var/run/dpdk/spdk_pid79535 00:33:25.398 Removing: /var/run/dpdk/spdk_pid80904 00:33:25.398 Removing: /var/run/dpdk/spdk_pid80964 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81018 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81072 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81149 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81212 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81354 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81697 00:33:25.398 Removing: /var/run/dpdk/spdk_pid81728 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82166 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82337 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82431 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82532 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82564 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82589 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82880 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82918 00:33:25.398 Removing: /var/run/dpdk/spdk_pid82963 00:33:25.398 Removing: /var/run/dpdk/spdk_pid83333 00:33:25.398 Removing: /var/run/dpdk/spdk_pid83478 00:33:25.398 Removing: /var/run/dpdk/spdk_pid84272 00:33:25.398 Removing: /var/run/dpdk/spdk_pid84386 00:33:25.398 Removing: /var/run/dpdk/spdk_pid84542 00:33:25.398 Removing: 
/var/run/dpdk/spdk_pid84617 00:33:25.398 Removing: /var/run/dpdk/spdk_pid84893 00:33:25.398 Removing: /var/run/dpdk/spdk_pid85124 00:33:25.398 Removing: /var/run/dpdk/spdk_pid85464 00:33:25.398 Removing: /var/run/dpdk/spdk_pid85624 00:33:25.398 Removing: /var/run/dpdk/spdk_pid85765 00:33:25.398 Removing: /var/run/dpdk/spdk_pid85802 00:33:25.398 Removing: /var/run/dpdk/spdk_pid86029 00:33:25.398 Removing: /var/run/dpdk/spdk_pid86046 00:33:25.398 Removing: /var/run/dpdk/spdk_pid86087 00:33:25.398 Removing: /var/run/dpdk/spdk_pid86351 00:33:25.398 Removing: /var/run/dpdk/spdk_pid86575 00:33:25.398 Removing: /var/run/dpdk/spdk_pid87158 00:33:25.398 Removing: /var/run/dpdk/spdk_pid87870 00:33:25.398 Removing: /var/run/dpdk/spdk_pid88521 00:33:25.659 Removing: /var/run/dpdk/spdk_pid89499 00:33:25.659 Removing: /var/run/dpdk/spdk_pid89646 00:33:25.659 Removing: /var/run/dpdk/spdk_pid89725 00:33:25.659 Removing: /var/run/dpdk/spdk_pid90313 00:33:25.659 Removing: /var/run/dpdk/spdk_pid90366 00:33:25.659 Removing: /var/run/dpdk/spdk_pid91102 00:33:25.659 Removing: /var/run/dpdk/spdk_pid91592 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92422 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92548 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92579 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92632 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92682 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92735 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92927 00:33:25.659 Removing: /var/run/dpdk/spdk_pid92996 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93057 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93135 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93171 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93227 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93362 00:33:25.659 Removing: /var/run/dpdk/spdk_pid93584 00:33:25.659 Removing: /var/run/dpdk/spdk_pid94100 00:33:25.659 Removing: /var/run/dpdk/spdk_pid94735 00:33:25.659 Removing: /var/run/dpdk/spdk_pid95376 00:33:25.659 Removing: /var/run/dpdk/spdk_pid96154 00:33:25.659 Clean 00:33:25.659 23:28:44 -- common/autotest_common.sh@1451 -- # return 0 00:33:25.659 23:28:44 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:25.659 23:28:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.659 23:28:44 -- common/autotest_common.sh@10 -- # set +x 00:33:25.659 23:28:44 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:25.659 23:28:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.659 23:28:44 -- common/autotest_common.sh@10 -- # set +x 00:33:25.659 23:28:44 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:25.659 23:28:44 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:25.659 23:28:44 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:25.659 23:28:44 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:25.659 23:28:45 -- spdk/autotest.sh@394 -- # hostname 00:33:25.659 23:28:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:25.920 geninfo: WARNING: invalid characters removed from testname! 
00:33:52.588 23:29:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:54.504 23:29:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:56.420 23:29:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:58.332 23:29:17 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:00.879 23:29:19 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:02.794 23:29:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:04.180 23:29:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:04.180 23:29:23 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:34:04.180 23:29:23 -- common/autotest_common.sh@1681 -- $ lcov --version 00:34:04.180 23:29:23 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:34:04.180 23:29:23 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:34:04.180 23:29:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:34:04.180 23:29:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:34:04.180 23:29:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:34:04.180 23:29:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:34:04.180 23:29:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:34:04.180 23:29:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:34:04.180 23:29:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:34:04.180 23:29:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:34:04.180 23:29:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:34:04.180 23:29:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:34:04.180 23:29:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:34:04.180 23:29:23 -- scripts/common.sh@344 -- $ case "$op" in 00:34:04.180 23:29:23 -- scripts/common.sh@345 -- $ : 1 00:34:04.180 23:29:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:34:04.180 23:29:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:04.180 23:29:23 -- scripts/common.sh@365 -- $ decimal 1 00:34:04.180 23:29:23 -- scripts/common.sh@353 -- $ local d=1 00:34:04.180 23:29:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:34:04.180 23:29:23 -- scripts/common.sh@355 -- $ echo 1 00:34:04.180 23:29:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:34:04.441 23:29:23 -- scripts/common.sh@366 -- $ decimal 2 00:34:04.441 23:29:23 -- scripts/common.sh@353 -- $ local d=2 00:34:04.441 23:29:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:34:04.441 23:29:23 -- scripts/common.sh@355 -- $ echo 2 00:34:04.441 23:29:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:34:04.441 23:29:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:34:04.441 23:29:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:34:04.441 23:29:23 -- scripts/common.sh@368 -- $ return 0 00:34:04.441 23:29:23 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.441 23:29:23 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:34:04.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.441 --rc genhtml_branch_coverage=1 00:34:04.441 --rc genhtml_function_coverage=1 00:34:04.441 --rc genhtml_legend=1 00:34:04.441 --rc geninfo_all_blocks=1 00:34:04.441 --rc geninfo_unexecuted_blocks=1 00:34:04.441 00:34:04.441 ' 00:34:04.441 23:29:23 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:34:04.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.441 --rc genhtml_branch_coverage=1 00:34:04.441 --rc genhtml_function_coverage=1 00:34:04.441 --rc genhtml_legend=1 00:34:04.441 --rc geninfo_all_blocks=1 00:34:04.441 --rc geninfo_unexecuted_blocks=1 00:34:04.441 00:34:04.441 ' 00:34:04.441 23:29:23 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:34:04.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.441 --rc genhtml_branch_coverage=1 00:34:04.441 --rc genhtml_function_coverage=1 00:34:04.441 --rc genhtml_legend=1 00:34:04.441 --rc geninfo_all_blocks=1 00:34:04.441 --rc geninfo_unexecuted_blocks=1 00:34:04.441 00:34:04.441 ' 00:34:04.441 23:29:23 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:34:04.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.441 --rc genhtml_branch_coverage=1 00:34:04.441 --rc genhtml_function_coverage=1 00:34:04.441 --rc genhtml_legend=1 00:34:04.441 --rc geninfo_all_blocks=1 00:34:04.441 --rc geninfo_unexecuted_blocks=1 00:34:04.441 00:34:04.441 ' 00:34:04.441 23:29:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:04.441 23:29:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:34:04.441 23:29:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:04.441 23:29:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.441 23:29:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.442 23:29:23 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.442 23:29:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.442 23:29:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.442 23:29:23 -- paths/export.sh@5 -- $ export PATH 00:34:04.442 23:29:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.442 23:29:23 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:04.442 23:29:23 -- common/autobuild_common.sh@479 -- $ date +%s 00:34:04.442 23:29:23 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731972563.XXXXXX 00:34:04.442 23:29:23 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731972563.cC90pE 00:34:04.442 23:29:23 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:34:04.442 23:29:23 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:34:04.442 23:29:23 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:34:04.442 23:29:23 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:34:04.442 23:29:23 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:04.442 23:29:23 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:04.442 23:29:23 -- common/autobuild_common.sh@495 -- $ get_config_params 00:34:04.442 23:29:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:34:04.442 23:29:23 -- common/autotest_common.sh@10 -- $ set +x 00:34:04.442 23:29:23 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:34:04.442 23:29:23 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:34:04.442 23:29:23 -- pm/common@17 -- $ local monitor 00:34:04.442 23:29:23 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:04.442 23:29:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:04.442 23:29:23 -- pm/common@25 -- $ sleep 1 00:34:04.442 23:29:23 -- pm/common@21 -- $ date +%s 00:34:04.442 23:29:23 -- pm/common@21 -- $ date +%s 00:34:04.442 23:29:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731972563 00:34:04.442 23:29:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731972563 00:34:04.442 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731972563_collect-cpu-load.pm.log 00:34:04.442 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731972563_collect-vmstat.pm.log 00:34:05.387 23:29:24 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:34:05.387 23:29:24 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:34:05.387 23:29:24 -- spdk/autopackage.sh@14 -- $ timing_finish 00:34:05.387 23:29:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:05.387 23:29:24 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:05.387 23:29:24 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:05.387 23:29:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:05.387 23:29:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:05.387 23:29:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:05.387 23:29:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.387 23:29:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:05.387 23:29:24 -- pm/common@44 -- $ pid=97898 00:34:05.387 23:29:24 -- pm/common@50 -- $ kill -TERM 97898 00:34:05.387 23:29:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:05.387 23:29:24 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:05.387 23:29:24 -- pm/common@44 -- $ pid=97899 00:34:05.387 23:29:24 -- pm/common@50 -- $ kill -TERM 97899 00:34:05.387 + [[ -n 5768 ]] 00:34:05.387 + sudo kill 5768 00:34:05.398 [Pipeline] } 00:34:05.416 [Pipeline] // timeout 00:34:05.421 [Pipeline] } 00:34:05.438 [Pipeline] // stage 00:34:05.444 [Pipeline] } 00:34:05.460 [Pipeline] // catchError 00:34:05.470 [Pipeline] stage 00:34:05.473 [Pipeline] { (Stop VM) 00:34:05.487 [Pipeline] sh 00:34:05.773 + vagrant halt 00:34:08.322 ==> default: Halting domain... 00:34:14.921 [Pipeline] sh 00:34:15.206 + vagrant destroy -f 00:34:17.751 ==> default: Removing domain... 
00:34:18.706 [Pipeline] sh 00:34:18.991 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:34:19.001 [Pipeline] } 00:34:19.015 [Pipeline] // stage 00:34:19.021 [Pipeline] } 00:34:19.035 [Pipeline] // dir 00:34:19.040 [Pipeline] } 00:34:19.054 [Pipeline] // wrap 00:34:19.060 [Pipeline] } 00:34:19.073 [Pipeline] // catchError 00:34:19.085 [Pipeline] stage 00:34:19.088 [Pipeline] { (Epilogue) 00:34:19.102 [Pipeline] sh 00:34:19.387 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:24.775 [Pipeline] catchError 00:34:24.777 [Pipeline] { 00:34:24.792 [Pipeline] sh 00:34:25.079 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:25.079 Artifacts sizes are good 00:34:25.090 [Pipeline] } 00:34:25.106 [Pipeline] // catchError 00:34:25.119 [Pipeline] archiveArtifacts 00:34:25.127 Archiving artifacts 00:34:25.221 [Pipeline] cleanWs 00:34:25.233 [WS-CLEANUP] Deleting project workspace... 00:34:25.233 [WS-CLEANUP] Deferred wipeout is used... 00:34:25.241 [WS-CLEANUP] done 00:34:25.243 [Pipeline] } 00:34:25.258 [Pipeline] // stage 00:34:25.263 [Pipeline] } 00:34:25.277 [Pipeline] // node 00:34:25.282 [Pipeline] End of Pipeline 00:34:25.321 Finished: SUCCESS